Common Arthas commands
The following are commonly used Arthas commands and their typical usage scenarios.
watch
Usage scenarios
(1) Watch invocations of a class method
# Watch methodA: when it is called, print the arguments (params), return value (returnObj), thrown exception (throwExp), and the target instance (target); stop after 5 invocations and expand the result to at most 3 levels of nesting
watch org.a.b.c.Abc methodA '{params,returnObj,throwExp,target}' -n 5 -x 3
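(2) Watch only invocations that match a condition
# watch also accepts an OGNL condition expression after the observe expression; the class and method below are the same hypothetical ones as above
# Only print output when the first argument is null
watch org.a.b.c.Abc methodA '{params,returnObj,throwExp}' 'params[0]==null' -x 2
# Only print output when the call takes longer than 10 ms (#cost is the elapsed time in milliseconds)
watch org.a.b.c.Abc methodA '{params,returnObj}' '#cost>10' -x 2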
trace
Usage scenarios
(1) Trace the internal execution of a method, including the time spent in each callee
# Trace handleTaskCompletion and print its call tree with per-call timings
[arthas@107281]$ trace org.apache.spark.scheduler.DAGScheduler handleTaskCompletion
Press Q or Ctrl+C to abort.
Affect(class count: 1 , method count: 1) cost in 834 ms, listenerId: 7
`---ts=2025-08-10 22:08:53;thread_name=dag-scheduler-event-loop;id=75;is_daemon=true;priority=5;TCCL=org.apache.spark.util.MutableURLClassLoader@2b97cc1f
`---[5.809311ms] org.apache.spark.scheduler.DAGScheduler:handleTaskCompletion()
+---[0.43% 0.025006ms ] org.apache.spark.scheduler.CompletionEvent:task() #1740
+---[0.14% 0.007985ms ] org.apache.spark.scheduler.Task:stageId() #1741
……
+---[0.12% 0.007194ms ] org.apache.spark.scheduler.DAGScheduler:taskScheduler() #1820
+---[0.62% 0.036026ms ] org.apache.spark.scheduler.DAGScheduler:shouldInterruptTaskThread() #1822
+---[1.38% 0.080118ms ] org.apache.spark.scheduler.TaskScheduler:killAllTaskAttempts() #1823
+---[0.08% 0.004548ms ] org.apache.spark.scheduler.ActiveJob:jobId() #1829
+---[0.08% 0.004589ms ] org.apache.spark.util.Clock:getTimeMillis() #1829
+---[0.16% 0.009007ms ] org.apache.spark.scheduler.SparkListenerJobEnd:<init>() #1829
+---[0.22% 0.012813ms ] org.apache.spark.scheduler.LiveListenerBus:post() #1829
+---[0.12% 0.006773ms ] org.apache.spark.scheduler.ActiveJob:listener() #1835
+---[0.07% 0.004098ms ] org.apache.spark.scheduler.ResultTask:outputId() #1835
+---[0.09% 0.005349ms ] org.apache.spark.scheduler.CompletionEvent:result() #1835
`---[0.54% 0.031178ms ] org.apache.spark.scheduler.JobListener:taskSucceeded() #1835
[arthas@107281]$
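(2) Only trace slow invocations
# trace accepts an OGNL condition expression; #cost filters on elapsed milliseconds (same target method as above)
trace org.apache.spark.scheduler.DAGScheduler handleTaskCompletion '#cost > 10'
# Stop automatically after capturing 5 invocations
trace org.apache.spark.scheduler.DAGScheduler handleTaskCompletion -n 5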
stack
Usage scenarios
(1) Show the call path into a method, i.e. the stacks from which it is invoked
# Print the call stack each time handleTaskCompletion is invoked, ending at handleTaskCompletion itself
[arthas@107281]$ stack org.apache.spark.scheduler.DAGScheduler handleTaskCompletion
Press Q or Ctrl+C to abort.
Affect(class count: 1 , method count: 1) cost in 284 ms, listenerId: 6
ts=2025-08-12 22:09:32;thread_name=dag-scheduler-event-loop;id=75;is_daemon=true;priority=5;TCCL=org.apache.spark.util.MutableURLClassLoader@2b97cc1f
@org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion()
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2979)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2924)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2913)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
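(2) Limit the number of captured stacks
# Stop automatically after 2 captured stacks (same target method as above)
stack org.apache.spark.scheduler.DAGScheduler handleTaskCompletion -n 2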
ognl
Usage scenarios
(1) Invoke a static method of a class
# Invoke the static getClusterHosts() method of org.a.b.c.Abc
ognl '@org.a.b.c.Abc@getClusterHosts()'
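(2) Read a static field, or target a specific classloader
# Read a static field of the same hypothetical class (the field name here is illustrative)
ognl '@org.a.b.c.Abc@someStaticField'
# If the class is not loaded by the system classloader, pass its classloader hash with -c; the hash can be found with: sc -d org.a.b.c.Abc
ognl -c <classLoaderHash> '@org.a.b.c.Abc@getClusterHosts()'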
logger
Log-related operations
Usage scenarios
(1) Change the level of a specific logger
# Change the kafka.log.LogManager logger level to debug
logger --name kafka.log.LogManager --level debug
# Change the ROOT logger level to warn
logger --name ROOT --level warn
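(2) Inspect logger configuration
# List all loggers with their effective levels
logger
# Show the details of a single logger (level, appenders, classloader)
logger --name kafka.log.LogManager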
vmtool
Operate on the JVM
Usage scenarios
(1) List live instances of a class
# List instances of kafka.log.LogCleanerManager, at most 10
vmtool --action getInstances --className kafka.log.LogCleanerManager --limit 10
(2) Inspect a field of a class instance
# Print the inProgress field of the first kafka.log.LogCleanerManager instance, expanded to at most 2 levels of nesting
vmtool --action getInstances --className kafka.log.LogCleanerManager --express 'instances[0].inProgress' -x 2
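(3) Other vmtool actions
# Invoke a method on a matched instance via --express (toString() here is just an illustration)
vmtool --action getInstances --className kafka.log.LogCleanerManager --express 'instances[0].toString()'
# Request a garbage collection
vmtool --action forceGc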
jad
The jad command decompiles the byte code of a class actually loaded in the JVM back into Java source, which helps you understand the running business logic
Usage scenarios
(1) Decompile the source of a loaded class
# Decompile the byte code of org.a.b.c.Abc
jad org.a.b.c.Abc
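(2) Decompile a single method, or dump clean source
# Decompile only methodA of the class
jad org.a.b.c.Abc methodA
# Print only the source, without the classloader and location banner, e.g. to redirect to a file
jad --source-only org.a.b.c.Abc > /tmp/Abc.java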