All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, how can I enable the mouse-hover feature on a column chart so it shows its data in Dashboard Studio? I have been searching for an answer but haven't found anything that works. Many thanks.

index=web AND uri_path!="*.nsf*" AND uri_path!="*:443"
| timechart span=1d dc(src_ip) by src_ip limit=0
Hi, now I want to forward data from a heavy forwarder (HF) to a single instance. Can you tell me your steps? Thank you very much.
Thanks, all, for the replies. I got an update from AppDynamics Support as well that the tool currently doesn't have the requested license-reporting functionality, so they suggested posting it on the Idea Exchange. Again, thanks all for your valuable replies.
I am looking forward to an answer too. Can anyone give an idea?
I want to make a box-plot graph using my data. I tried to find a solution, but it requires installing an app from a file in Splunk, so it couldn't be applied within my own app (because "my Apps" and "install app" are different). Is there any way to draw a box plot in my own app without using "Install app from file"?
We are trying to ingest a large amount (petabytes) of information into Splunk. The events are in JSON files named like 'audit_events_ip-10-23-186-200_1.1512077259453.json'. The pipeline is: JSON files > folder > UF > HF cluster > indexer cluster.

UF inputs.conf:

[batch:///folder]
_TCP_ROUTING = p2s_au_hf
crcSalt = <SOURCE>
disabled = false
move_policy = sinkhole
recursive = false
whitelist = \.json$

We are seeing that events from specific files (NOT all) are getting duplicated; some files are indexed exactly twice. Since this is [batch:///], which is supposed to delete the file after reading it, and crcSalt = <SOURCE> is set, we are not able to figure out why, and what creates the duplicates. Would appreciate any help, references, or pointers. Thanks in advance!
Yeah, unfortunately mvexpand can be memory intensive. I would say limit your field set as much as possible before using it and see if that helps. It may actually work to just do:

<base_search>
| stats count by "records{}.properties.flows{}.flows{}.flowTuples{}"
| eval time=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 0),
    src_ip=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1),
    dst_ip=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 2),
    src_port=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 3),
    dst_port=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 4),
    protocol=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 5),
    traffic_flow=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 6),
    traffic_result=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 7)
| stats sum(count) as total by src_ip, dst_ip

This should tally up all the individual flowTuples from the events; the evals then split each distinct tuple out, and the final stats sums it all up by source and destination IP. I think this gets around the need for mvexpand. Let me know if that works!
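Outside of SPL, the idea above (count identical flow tuples first, then split and re-sum) can be sketched in plain Python. The tuple layout time,src_ip,dst_ip,src_port,dst_port,protocol,traffic_flow,traffic_result mirrors the mvindex calls; the sample values are made up for illustration:

```python
from collections import Counter

# Made-up flow tuples in the comma-separated layout used above:
# time,src_ip,dst_ip,src_port,dst_port,protocol,traffic_flow,traffic_result
tuples = [
    "1512077259,10.0.0.1,10.0.0.9,443,50001,T,I,A",
    "1512077259,10.0.0.1,10.0.0.9,443,50001,T,I,A",
    "1512077261,10.0.0.2,10.0.0.9,80,50002,T,I,A",
]

# Step 1 ("stats count by tuple"): collapse duplicate tuples first,
# so the expensive per-row work only runs once per distinct tuple.
counts = Counter(tuples)

# Step 2 ("eval split" + "stats sum(count) by src_ip, dst_ip"):
# pull src/dst out of each distinct tuple and sum the counts per pair.
totals = Counter()
for t, n in counts.items():
    fields = t.split(",")
    src_ip, dst_ip = fields[1], fields[2]
    totals[(src_ip, dst_ip)] += n

print(totals)
```

The design point is the same as in the SPL: aggregating before splitting keeps the row count at "distinct tuples" instead of "all events", which is what avoids the mvexpand blow-up.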
Thanks @dtburrows3, this method worked perfectly. I'm able to extract the required fields while still keeping the associations intact. Although, running this at scale, I am getting the following message:

command.mvexpand: output will be truncated at 2200 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

Are there any alternatives to mvexpand that would avoid these memory issues?
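For reference, the threshold named in that message lives in limits.conf, and a minimal sketch of raising it looks like the following. Whether you can edit limits.conf depends on your deployment (on Splunk Cloud this typically goes through a support request), and raising the ceiling only postpones the problem rather than fixing it:

```
# limits.conf -- raise the mvexpand memory ceiling (default 500 MB)
[mvexpand]
max_mem_usage_mb = 1000
```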
Hello @dtburrows3, thank you so much for your quick response; I truly appreciate it. I am getting this:
At first glance, it looks like the regex you provided relies on double quotes to find the pattern, and the sample events don't seem to have any. I think you may have better luck using a regex closer to:

REGEX = \s+([^:]+?):\s+([^,]+)

I haven't tested it locally, but on regex101 it looks like it matches pretty well.
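If it helps to sanity-check the suggested pattern outside Splunk, here is a minimal Python sketch running it against the first sample event from the question (only the regex and the sample data are taken from the thread; everything else is scaffolding):

```python
import re

# The suggested transforms regex: key before a colon, value up to the next comma.
PATTERN = re.compile(r"\s+([^:]+?):\s+([^,]+)")

event = ("2023-11-15T18:56:30.098Z, User ID: 90A, User Type: TempEMP, "
         "Product Code:  pc, UAT:  UTA-True, Event Type:  TEST, "
         "EventID:  Lookup, Remote Host: 25.191.157.244")

# findall returns (key, value) pairs; the leading timestamp is skipped
# because its colons are not preceded by whitespace.
pairs = dict(PATTERN.findall(event))
print(pairs)
```

Note that the leading \s+ is what keeps the colons inside the timestamp from producing bogus pairs.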
Hello, I am having issues getting the expected field/value pairs using the following props and transforms configuration files. Sample events and my configuration files are given below. Any recommendation will be highly appreciated.

My configuration files:

[mypropsfile]
REPORT-mytranforms = myTransfile

[myTransfile]
REGEX = ([^"]+?):\s+([^"]+?)
FORMAT = $1::$2

Sample events:

2023-11-15T18:56:30.098Z, User ID: 90A, User Type: TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:29.098Z, User ID: 90A, Host:  vx2tbax.dev, User Type: TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:28.098Z, User ID: 91B, User Type:  TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:27.098Z, User ID: 91B, User Type:  TempEMP,  Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
2023-11-15T18:56:27.001Z, User ID: 91B, User Type:  TempEMP,  Host:  vx2tbax.dev, Product Code:  pc, UAT:  UTA-True, Event Type:  TEST,  EventID:  Lookup, Remote Host: 25.191.157.244
To retain the associations for any sort of analysis, you may need to mvexpand the "records{}.properties.flows{}.flows{}.flowTuples{}" field itself. A stats aggregation using two multivalue fields as by-fields can be misleading in the final output. Below is a table of the event you shared in the initial post after using mvexpand and then extracting the individual fields. SPL to do this:

| mvexpand "records{}.properties.flows{}.flows{}.flowTuples{}"
| eval time=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 0),
    src_ip=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1),
    dest_ip=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 2),
    src_port=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 3),
    dest_port=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 4),
    protocol=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 5),
    traffic_flow=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 6),
    traffic_result=mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 7)

Doing a stats count by src_ip and dest_ip should make more sense with the data formatted this way.
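The mvindex/split logic above just indexes into a comma-separated tuple by position. As a plain-Python analogue (the field order is taken from the evals above; the sample tuple itself is made up):

```python
# Field order matching the mvindex(..., 0) through mvindex(..., 7) calls above.
FIELDS = ["time", "src_ip", "dest_ip", "src_port",
          "dest_port", "protocol", "traffic_flow", "traffic_result"]

def parse_flow_tuple(raw):
    """Split one flowTuples entry and pair each value with its field name."""
    return dict(zip(FIELDS, raw.split(",")))

# Made-up example tuple.
row = parse_flow_tuple("1512077259,10.0.0.1,10.0.0.9,443,50001,T,I,A")
print(row["src_ip"], row["dest_ip"])
```

Because each expanded row is parsed as a unit, the src/dest/port values stay associated with each other, which is exactly what the per-row mvexpand approach preserves.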
@dtburrows3, thank you for the reply. I tried these evals and the fields are getting extracted from the tuples, but it seems the association between them is lost. For this one event there are 17 tuples in total, but after applying the evals, the resulting stats show several other combinations of src_ip and dst_ip. [Screenshots: stats for the field records{}.properties.flows{}.flows{}.flowTuples{}, and stats on src_ip/dst_ip after applying the evals.]
Hi @gcusello, I am able to get the different status codes in a pie chart. If I also want to append another query's count to get the "totalrequest", it is not being added to the pie chart. How can I add the below to the pie chart? Let's say the total request count is 3: success 200: 2 (green), 400 error: 1 (pink), 500 error: 1 (red).

index="1**" source="2***"
| rex "(?ms)statusCode: (?<status_code>\d+)"
| stats count by statusCode
| appendcols [search index="1**" source="2**" "republish event" | stats count by event.body | stats count | rename count as totalrequest]
Hi @splunkerhtml, may I know whether, after creating the token, you copied and pasted it? After pasting, there is a chance that you included a space when you entered it (many of my friends have faced this issue!). Just double-check the token you created and copy-pasted, then update us, thanks. Or, if this is a production project, you may contact Splunk Cloud Support; they should be able to help you. Upvotes / karma points are appreciated by everybody, thanks.
You need to illustrate actual data (column format or raw, in text, anonymized as needed). Then, explain which command in your search "deducts" (I assume that means removes) said events. I don't see any logic to eliminate "user.lifecycle.delete.completed". Also, how does this string relate to data fields?
What's the query and data that this comes from?
Your `indextime` is a macro, and its expansion does not work with >=$info_min_time$. Other points:

The documentation says that map does not work after appendpipe or append commands; see Known limitations: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/map

Your use of appendpipe in this example is odd in that it does nothing; I assume this is from some more complete search.

This search is probably NOT the way you want to do what you are trying to do: given your maxsearch=20000, this may take forever to run if you really have that many searches to run sequentially.

Perhaps you can say what you're trying to achieve, as map may not be the solution for your scenario.
Thank you, dtburrows3. I thought the same thing but didn't know how to find what was being loaded in the Cloud, and I don't know of a btool option in the Cloud for a custom app. Our developer created this custom app with everything in the default folder, so local wasn't a path we were deploying. I finally realized that we may have created a local folder ourselves in the GUI when someone went into Manage Apps, viewed objects, and edited the XML. I have modified it there manually to resolve the problem, but I want to delete the local view completely, and it didn't get removed when I uploaded a new release of the custom app to Splunk Cloud. Does anyone know how to delete a file from an app in Splunk Cloud?
I am attempting to ingest an XML file but am getting stuck; can someone please help? The data will ingest if I remove "BREAK_ONLY_BEFORE =\<item\>", but with a new event per item.

This is the XML and the configuration I have tried:

<?xml version="1.0" standalone="yes"?>
<DocumentElement>
  <item>
    <hierarchy>ASA</hierarchy>
    <hostname>AComputer</hostname>
    <lastscandate>2023-12-17T11:08:21+11:00</lastscandate>
    <manufacturer>VMware, Inc.</manufacturer>
    <model>VMware7,1</model>
    <operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
    <ipaddress>168.132.11.200</ipaddress>
    <vendor />
    <lastloggedonuser>JohnSmith</lastloggedonuser>
    <totalcost>0.00</totalcost>
  </item>
  <item>
    <hierarchy>ASA</hierarchy>
    <hostname>AComputer</hostname>
    <lastscandate>2023-12-17T12:20:21+11:00</lastscandate>
    <manufacturer>Hewlett-Packard</manufacturer>
    <model>HP Compaq Elite 8300 SFF</model>
    <operatingsystem>Microsoft Windows 8.1 Enterprise</operatingsystem>
    <ipaddress>168.132.136.160</ipaddress>
    <vendor />
    <lastloggedonuser>JohnSmith</lastloggedonuser>
    <totalcost>0.00</totalcost>
  </item>
  <item>
    <hierarchy>ASA</hierarchy>
    <hostname>AComputer</hostname>
    <lastscandate>2023-12-17T11:54:28+11:00</lastscandate>
    <manufacturer>HP</manufacturer>
    <model>HP EliteBook 850 G5</model>
    <operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
    <ipaddress>168.132.219.32, 192.168.1.221</ipaddress>
    <vendor />
    <lastloggedonuser>JohnSmith</lastloggedonuser>
    <totalcost>0.00</totalcost>
  </item>
  <item>
    <hierarchy>ASA</hierarchy>
    <hostname>AComputer</hostname>
    <lastscandate>2023-12-17T11:50:20+11:00</lastscandate>
    <manufacturer>VMware, Inc.</manufacturer>
    <model>VMware7,1</model>
    <operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
    <ipaddress>168.132.11.251</ipaddress>
    <vendor />
    <lastloggedonuser>JohnSmith</lastloggedonuser>
    <totalcost>0.00</totalcost>
  </item>

Inputs.conf:

[monitor://D:\SplunkImportData\SNOW\*.xml]
sourcetype = snow:all:devices
index = asgmonitoring
disabled = 0

Props.conf:
[snow:all:devices]
KV_MODE = xml
BREAK_ONLY_BEFORE = \<item\>
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE