All Topics



Hi, I'm new to AppDynamics and I'm getting error messages regarding the proxy when using the Python agent; I would appreciate any help resolving this issue.

If I follow the steps in the documentation and just create an /etc/appdynamics.cfg file:

[agent]
app = Test App
tier = Test Tier
node = node 0adb

[controller]
host = XXXX.saas.appdynamics.com
port = 443
ssl = (on)
account = XXX
accesskey = XXX

I get this error:

19:21:00,008  WARN [AD Thread-Metric Reporter0] MetricHandler - Metric Reporter Queue full. Dropping metrics.
19:21:20,171  INFO [AD Thread Pool-Global1] ControllerTimeSkewHandler - Controller Time Skew Handler Run Aborted - Skew Check is Disabled
19:21:23,040  INFO [AD Thread Pool-Global1] ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:4416, Name:appdynamics.ip.addresses, Value:fe80:0:0:0:d453:dc4:33ad:b620%enp0s3,192.168.1.15]
19:21:23,040  INFO [AD Thread Pool-Global1] ConfigurationChannel - Sending Registration request with: Application Name [Test App], Tier Name [Test Tier], Node Name [node 0adb], Host Name [ChrisUbuntu-VM] Node Unique Local ID [node 0adb], Version [Python Agent v20.3.0.0 (proxy v4.5.16.28134, agent-api v4.3.5.0)]
19:21:23,559 ERROR [AD Thread Pool-Global1] ConfigurationChannel - Fatal transport error while connecting to URL [/controller/instance/0/applicationConfiguration]: org.apache.http.NoHttpResponseException: XXXX.saas.appdynamics.com:443 failed to respond
19:21:23,559  WARN [AD Thread Pool-Global1] ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [XXX.saas.appdynamics.com], port[443], exception [Fatal transport error while connecting to URL [/controller/instance/0/applicationConfiguration]]
19:22:00,008  WARN [AD Thread-Metric Reporter0] MetricHandler - Metric Reporter Queue full. Dropping metrics.
19:22:19,066  WARN [AD Thread Pool-Global1] EventGenerationService - The retention queue is at full capacity [5]. Dropping events for timeslice [Sun Apr 05 19:17:00 AEST 2020] to accomodate events for timeslice [Sun Apr 05 19:22:00 AEST 2020]

If I instead edit controller-info.xml in /usr/local/lib/python3.6/dist-packages/appdynamics_bindeps/proxy/conf/controller-info.xml with:

<controller-host>EDITED.saas.appdynamics.com</controller-host>
<controller-port>443</controller-port>
<application-name>Test App</application-name>
<tier-name>Test Tier</tier-name>
<node-name>node 0adb</node-name>
<account-name>EDITED</account-name>
<account-access-key>EDITED</account-access-key>
<use-ssl-client-auth>true</use-ssl-client-auth>

then I get these errors (log file /tmp/appd/logs/proxyCore.2020_04_05__20_12_05.0.log):

[Thread-1] 05 Apr 2020 20:12:41,174  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Creating new node in proxy for node appName:Test App tierName:Test Tier nodeName:node 0adb
[Thread-1] 05 Apr 2020 20:12:41,175  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Comm address for the start node request: [app.name=Test App,tier.name=Test Tier,node.name=node 0adb,controller.host.name=XXXXX.saas.appdynamics.com,account.name=XXXX,account.key=XXXX,controller.port=443] is: /tmp/appd/run/comm/proxy-7826785417410579370/n15
[AD Thread Pool-ProxyControlReq0] 05 Apr 2020 20:12:41,379  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Removed lock for start node request for key [app.name=Test App1,tier.name=Test Tier1,node.name=node 0adb,controller.host.name=XXX.saas.appdynamics.com,account.name=XXX,account.key=01sbc6q69823,controller.port=443]
[AD Thread Pool-ProxyControlReq0] 05 Apr 2020 20:12:41,381  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Proxy for node [app.name=Test App,tier.name=Test Tier,node.name=node 0adb,controller.host.name=XXXX.saas.appdynamics.com,account.name=XXX,account.key=XXXX,controller.port=443] has been started
[Thread-1] 05 Apr 2020 20:12:42,396 ERROR com.singularity.proxyControl.ProxyMultiNodeManager - Error while starting a new proxy node
java.lang.NullPointerException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.startProxy(ProxyMultiNodeManager.java:190)
        at com.appdynamics.ee.agent.proxy.bootstrap.ZeroMQControlServer$ReqHandler.run(ZeroMQControlServer.java:312)
        at java.lang.Thread.run(Thread.java:748)
[Thread-1] 05 Apr 2020 20:12:42,397 ERROR com.singularity.proxyControl.ProxyNode - Error while shutting down proxy node
java.lang.NullPointerException
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyNode.shutdown(ProxyNode.java:152)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.stopProxyNode(ProxyMultiNodeManager.java:222)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.startProxy(ProxyMultiNodeManager.java:207)
        at com.appdynamics.ee.agent.proxy.bootstrap.ZeroMQControlServer$ReqHandler.run(ZeroMQControlServer.java:312)
        at java.lang.Thread.run(Thread.java:748)
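One detail worth double-checking in the first setup: `ssl = (on)` looks like the documentation's placeholder notation copied literally into the file. The cfg file is plain INI, so the intended line is presumably `ssl = on`. A guess at the corrected file, with the placeholder values left exactly as in the question:

```ini
[agent]
app = Test App
tier = Test Tier
node = node 0adb

[controller]
host = XXXX.saas.appdynamics.com
port = 443
ssl = on
account = XXX
accesskey = XXX
```

This is a hypothesis, not a confirmed fix: a NoHttpResponseException on port 443 is also what a TLS mismatch can look like from the client side, and the proxy NullPointerException from the second attempt may be a separate problem.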
Basically, what I need to do is compare a user's authentication request to their most recent session start request and alert when they are different. Example:
example@email.com session started with IP 12.145.123
example@email.com authentication request with IP 12.145.123
example@email.com authentication request with IP 58.145.12.125
I would want to see that last authentication request as an alert.
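A sketch of one approach: pick up each user's newest session-start IP and newest authentication IP in one pass and keep only mismatches. The index, the `action` values, and the field names `src_ip` and `user` are all assumptions to adapt to your data:

```spl
index=auth (action="session_start" OR action="authentication")
| stats latest(eval(if(action=="session_start", src_ip, null()))) AS session_ip
        latest(eval(if(action=="authentication", src_ip, null()))) AS auth_ip
        BY user
| where isnotnull(session_ip) AND isnotnull(auth_ip) AND auth_ip!=session_ip
```

Scheduling this search and alerting when the result count is greater than zero would flag users whose newest authentication IP differs from their newest session-start IP.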
Hi guys, I have built the Authentication datamodel on Splunk ES. However, I am dealing with a dilemma of duplicate events here. To explain in detail: if an authentication attempt occurs from src X to dest Y, the same event is generated on X, on Y, and on Domain Controller A. I am collecting logs from all three machines and adding them to the datamodel. So, when I use a tstats count against the datamodel, I see 3 events depicting 3 attempts instead of one. The only way I see to remove this duplication is to add src or host as an additional separator; but if I do, I won't be able to monitor for login failures from multiple sources. So I just wanted to know how you are tackling this. Thanks in advance.
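One way people tackle this is to collapse the duplicate copies into one logical attempt before counting, while keeping src available for multiple-source detections. A hedged sketch against the CIM Authentication datamodel; treating "same second, same src/dest/user" as one attempt is an assumption, as is `action="failure"`:

```spl
| tstats count from datamodel=Authentication
        where Authentication.action="failure"
        by _time span=1s Authentication.src Authentication.dest Authentication.user
| stats count AS attempts dc(Authentication.src) AS distinct_sources
        BY Authentication.user
```

Each row of the tstats output represents one deduplicated attempt regardless of how many hosts logged it, so `attempts` counts logical attempts and `distinct_sources` still supports the multiple-source criterion.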
In the indexing phase, once data is written to disk, it cannot be changed; I think the answer is YES. Kindly explain more in a line or two.
Hi @gcusello, hope you are doing well. As far as I understand, m@d means the beginning of the day, and -45m@d means 45 minutes before the beginning of the day. Kindly correct me; I am always confused by this.
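For what it's worth, in Splunk's relative time syntax the offset is applied first and the snap (@) second, so -45m@d means "subtract 45 minutes from now, then snap backward to the start of that day", which lands on midnight, not on 23:15. "45 minutes before the beginning of the day" would be @d-45m. A quick way to compare the two, using the standard `makeresults` and `relative_time`:

```spl
| makeresults
| eval minus45m_at_d = strftime(relative_time(now(), "-45m@d"), "%F %T"),
       at_d_minus45m = strftime(relative_time(now(), "@d-45m"), "%F %T")
```

Run at, say, 10:00, the first field shows today at 00:00:00 and the second shows yesterday at 23:15:00; before 00:45 the first would snap all the way back to yesterday's midnight.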
Hello, we are planning to create a dashboard for the Redwood scheduler tool where we want to see service availability, uptime, job monitoring, HA, etc. Please let me know if there is any sample dashboard available in Splunk (or a link to any example dashboard) for a scheduling and monitoring tool which we can take as a base and customize. We need some guidance on what we can show and how to show it. Thanks; any suggestion is much appreciated. Regards
Hi all, I have an issue with a new SQS Based S3 input. There is a large amount of historical data in S3 that I don't want to ingest. Is there a way to manipulate the pointer used by Splunk when reading S3 so it starts ingesting data from a nominated time, for example today onwards and thereby ignore historical data? Many thanks, David
Hello, I'm trying to build a KPI for objects from MS SCOM. I'm using the standard SCOM add-on to get information about objects and events from SCOM. I have created one field, "kpi_value", which gives me the current status of a certain object. Directly from the search everything works OK, but ITSI cannot see these values. I have some suspicions: I'm checking this field every 5 minutes, but in the direct search I see that if there are no incidents I only get updates every 4 hours. I have the option "Fill gaps in data with the last available value of data." enabled, but it looks like this option cannot help here.
Unable to initialize modular input "web_input" defined in the app "website_input": Introspecting scheme=web_input: script running failed (exited with code 1).
Hi, I have data which captures events (phone calls). I would like to graph, in hourly buckets, the total calls per hour multiplied by the average call duration. I have fumbled around and been able to present the two components in separate charts, but failed when trying to merge both charts into one with the multiplication performed.

First search:
index=mydata sourcetype=cer
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| stats count AS B_Number first(date_hour) AS date_hour BY _time
| stats avg(B_Number) BY date_hour

Second search:
index=mydata sourcetype=cer
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| eval Callhours=CallDurationSecs/3600
| stats avg(Callhours) BY date_hour

How do I combine these searches with the multiplication applied? E.g.:
Search 1, hour 1 result equals .00112299
Search 2, hour 1 result equals 35555
Desired result for hour 1: 399.21 (.00112299*35555)
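The two aggregations can be produced in a single search and multiplied at the end. A sketch that keeps the field names from the question; the `CallHours` and `result` names are mine, and whether you want the average of per-hour averages (as below) or the overall per-date_hour average of CallDurationSecs may need adjusting:

```spl
index=mydata sourcetype=cer
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| stats count AS B_Number avg(eval(CallDurationSecs/3600)) AS CallHours
        first(date_hour) AS date_hour BY _time
| stats avg(B_Number) AS avg_calls avg(CallHours) AS avg_callhours BY date_hour
| eval result=avg_calls*avg_callhours
```

Charting `result` by `date_hour` then gives one series with the multiplication already applied.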
Hello, this is urgent. I am trying to connect to an MS SQL DB to ingest the data and build my Splunk index. When I try to create a new connection I get this error:

"There was an error processing your request. It has been logged (ID XXXXXXXXX"

Error details from the logs:

2020-04-04 23:41:02.823 +0300 [dw-59 - GET /api/connections/SysAdminSQLDBConnect/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 33f91cc929c63cfd
java.lang.NullPointerException: null
at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:102)
at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatus(ConnectionResource.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:767)
at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.lang.Thread.run(Unknown Source)

BTW, the Splunk service is using an admin identity with DB access privileges.

System versions:
Windows Server 2016
Splunk version: 8.0.3
MS SQL 2017, database version 14.0.3281.6
JRE Version 8 (1.8.0_241)
JDBC Driver Version 8.2.2
DB Connect App 3.3.0
Will there be an updated app for Vormetric so it will work with Splunk 8? The issue is the advanced XML.
Hi, I'm trying to get the Splunk App for Web Analytics to work and am having trouble with the Web data model acceleration. Currently, when viewing the Analytics Center and Audience tabs, I get 'No results found'. After some investigating, this is because the Web.eventtype=pageview part of the SPL query that runs is not returning any results (when I examine the search and remove this part, results are returned). When I look at the events in the datamodel via Pivot there are no events where this is true; in fact the Web.eventtype field is empty for all events in the data model. Note: this is when the data model is accelerated.

When I turn off acceleration I can see that the field 'eventtype' in the data model is created by an auto-extracted field. I can then go to the edit page of the eventtype field and preview the output of the extraction, and the eventtype field is populated as expected. I can check this through a Pivot again and the values for the eventtype field are still present. I can't work out why the field values disappear when enabling data model acceleration.

I'm only working on a small data set currently for testing (<10 MB) and have given 2+ hours for the data model acceleration to be built. I have also generated the lookup tables required by the app a number of times, so that is not the issue. I am using v2.2.2 of the app and Splunk v8.0.2. I believe I have the app set up correctly, as the Real Time tab is showing data. I also briefly was able to view data on the Analytics Center and Audience tabs yesterday but can't work out what changes caused this to occur!

Edit: When I run the Data Model Audit I get the following error on the Top Accelerations visualizations:

[map]: Failed to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/nobody/SplunkAppForWebAnalytics/admin/summarization/tstats:DM_SplunkAppForWebAnalytics_myWeb?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
I installed the Splunk CIM and TA app with the goal of uploading .log files from FGTA devices. I have several from webfilter, evpn, and traffic. None of the default fgt_* sourcetypes extracts fields properly; all I get is time extraction and "Event". How can I correct this?
I have a CSV with just 2 columns, Time and Memory; it is basically a CSV extract of a server's memory utilization for April 3rd from 12:00 AM to 11:30 PM at an interval of 10 minutes. The events look like this:

Time: 4/3/20 11:34:00.000 PM
Event: 4/3/2020 23:34,98%

When I run a very simple query:

index="memory" | timechart count

the Statistics tab looks OK, however for some reason the Visualization tab is pushed back and starts from April 2nd. Of course I thought it to be an issue with the time modifiers and tried tinkering like this:

index="memory" | rex field=_raw "(?<time>.*?)\," | eval time=strptime(time,"%m/%d/%Y %H:%M") | eval _time=time | timechart count

In the rex, I am extracting 'time' from the event (_raw) and NOT from the first CSV column 'Time'. BUT the output remains the same: the Statistics tab looks absolutely correct but the Visualization tab gets pushed back. Any clues?
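Assuming the raw event looks like `4/3/2020 23:34,98%`, one sketch of the extraction; the anchored capture and the format string are the parts to adapt, and it is also worth checking the user and indexer timezone settings, since a timezone offset is a common reason points shift back to the previous day on the chart:

```spl
index="memory"
| rex field=_raw "^(?<time>[^,]+),"
| eval _time=strptime(time, "%m/%d/%Y %H:%M")
| timechart span=1h count
```

Pinning an explicit `span` also rules out timechart choosing a bucket width that makes the x-axis start earlier than the data.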
Hi, I am running a query to index my data into Splunk using a rising column. However, whenever any column has a blank value, that particular event is not indexed into Splunk.

Example: I have the data below and am using ORDER_DATE as the rising column.

NAME   LOCATION  MOBILE  VENDOR  ORDER_DATE
Mary   Texas     1234    AMAZON  Feb202020
Khan   Miami     4567    DD      Mar042020
Kevin  NewYork           FEDEX   Feb142020
Peter            6789    AMAZON  Mar122020

Here DB Connect is not indexing the data for Kevin and Peter, and I am not able to see that data in Splunk. Kevin's mobile number is blank and Peter's location is blank. Please help; why is it skipping this data?
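One way to test whether the NULLs are really the trigger is to make the blanks explicit in the input SQL, so every row carries a value in every column before DB Connect sees it. A hedged sketch; the table name ORDERS and the 'UNKNOWN' filler are both invented for illustration, and `?` stands for the rising-column checkpoint placeholder DB Connect substitutes:

```sql
SELECT NAME,
       COALESCE(LOCATION, 'UNKNOWN') AS LOCATION,
       COALESCE(MOBILE,   'UNKNOWN') AS MOBILE,
       COALESCE(VENDOR,   'UNKNOWN') AS VENDOR,
       ORDER_DATE
FROM   ORDERS
WHERE  ORDER_DATE > ?
ORDER  BY ORDER_DATE ASC
```

If the rows still do not appear with this query, the blanks are probably not the cause and the checkpoint value or ORDER_DATE ordering would be the next thing to inspect.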
I have a log that contains a numerical value which is logged irregularly. I would like to calculate (and show on a timechart) the total continuous duration when the value is equal to or above a static threshold, but I'd like to reset this calculation when it goes below that threshold. For threshold == 45 it should look like this: [screenshot]. How can I get this? Thanks for your help.
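One common SPL pattern is to give every below-threshold event a new group id; each continuous run at or above the threshold then collapses into one group whose duration can be computed. A sketch; the index name and the field name `value` are assumptions, and 45 is the threshold from the question:

```spl
index=mylog
| sort 0 _time
| eval above=if(value>=45, 1, 0)
| streamstats sum(eval(if(above=0, 1, 0))) AS group_id
| where above=1
| stats min(_time) AS start max(_time) AS stop BY group_id
| eval duration_sec=stop-start
```

With irregular logging, the duration is measured between the first and last sample of each run; to plot it, append something like `| eval _time=start | timechart span=1h max(duration_sec)`.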
I'm getting the following error when trying to poll an ESKM keymgmt server using SNMPv3:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/snmp_ta/bin/snmp.py" wrongDigest snmp_stanza:snmp://Encryption_Keymgmt_Devices

Any ideas where the problem may be coming from?
I am new to Splunk; can I get advice on moving a Splunk universal forwarder from one DB host to another? I am looking for options where I don't have to configure a whole lot on the target. I tried copying the SPLUNK_HOME tarball to the target server and changing the hostname in the .conf files. I see the data coming in, but I don't see the sourcetype options, tags, or dashboards. Thanks
Hi everyone, I have a query that produces table 1 below:

| from inputlookup:"incident.csv"
| where caused_by >= " "
| stats count values(caused_by) by assignment_group

I'd like to make the table more useful by reflecting it as tables 2 and 3. I tried:

| from inputlookup:"incident.csv"
| where caused_by >= " "
| stats count values(caused_by) by assignment_group
| mvexpand values(caused_by)

but it did not give the desired results. Any help would be appreciated.
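mvexpand takes a field name, and `values(caused_by)` is only addressable as a field once it is renamed with AS, which is likely why the attempt returned nothing useful. A sketch, written with the plain inputlookup form and a non-blank filter equivalent to the original `>= " "` test:

```spl
| inputlookup incident.csv
| where caused_by!=""
| stats count values(caused_by) AS caused_by BY assignment_group
| mvexpand caused_by
```

Note the count column still reflects the whole assignment_group after expansion; if tables 2 and 3 are per-cause counts, `| stats count BY assignment_group caused_by` may be closer to what you want.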