All Topics

Hi All, I have nested JSON in my log event and need to build a dynamic table from it. Sample event:

{"status": "FINISHED", "data": [{"duration": 123, "status": "A"}, {"duration": 456, "status": "B"}, {"duration": 678, "status": "C"}]}

Desired table structure:

status    A    B    C
FINISHED  123  456  678

One more requirement: if the data array gains more values in the future, can a column be added for those automatically as well?
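A sketch of one possible SPL approach, assuming the raw event is valid JSON and the field names match the sample above (status, data{}.status, data{}.duration). Because xyseries builds columns from the data, any new status value in the array becomes a new column automatically:

```
| spath input=_raw
| eval pair=mvzip('data{}.status', 'data{}.duration')
| mvexpand pair
| eval col=mvindex(split(pair, ","), 0), dur=mvindex(split(pair, ","), 1)
| xyseries status col dur
```

mvzip pairs each sub-object's status with its duration ("A,123"), mvexpand splits the pairs into one row each, and xyseries pivots them into one row per top-level status with one column per sub-status.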
Is there a way to create a daily report of the number of times a particular playbook is run?
Hi Splunkers,

Query: I have a use case in which we need to display a Splunk dashboard on a screen constantly for 8+ hours.

Splunk authentication method: we are using SSO authentication for login.

Splunk architecture: 3 indexers, one search head, one deployment server, and one cluster master.

Solutions tried: used the refresh tag in the dashboard; tried creating a new user that logs in without SSO.

Note: we can't use Splunk TV in this use case.

Any help would be appreciated.
how can i get date data in fields to use the splunk features. What is the best way for it. The raw data are coming from a kafka stream: e.g.: MAX_CURRENT BLOCK_POSITION_NUMBER OPERATING_DISTANCE_KM row data: {"body":"{\"namespace\":\"xxx.messages.sensor.sensortelemetrymessage\",\"payload\":{\"dataSource\":{\"endpointUrl\":\"opc.tcp://123.164.72.6:4840/\",\"id\":\"\",\"name\":\"\",\"route\":\"\"},\"data\":{\"key\":\"ns=3;s=\\\"TEST_CARRIER_DB\\\".\\\"CARRIER\\\"[105]\",\"value\":[{\"key\":\"TIMESTAMP\",\"value\":\"0001-01-01T00:00:00Z\",\"dataType\":\"DateTime\"},{\"key\":\"PLC_CARRIER\",\"value\":[{\"key\":\"OPERATION_MODE\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"TEMP_CABINET\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"F_PROG_SIG\",\"value\":[{\"key\":\"0\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"1\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"2\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"3\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"4\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"5\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"6\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"7\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"8\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"9\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"10\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"11\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"12\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"13\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"14\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"15\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"16\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"17\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"18\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"19\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"20\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"21\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"k
ey\":\"22\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"23\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"24\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"25\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"26\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"27\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"28\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"29\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"30\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"31\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"32\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"33\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"34\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"35\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"36\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"37\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"38\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"39\",\"value\":\"32\",\"dataType\":\"Byte\"}],\"dataType\":\"ByteCollection\"},{\"key\":\"PROG_DAT\",\"value\":[{\"key\":\"0\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"1\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"2\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"3\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"4\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"5\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"6\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"7\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"8\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"9\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"10\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"11\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"12\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"13\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"14\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"15\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"16\",\"value\":\"32\",\"dataType
\":\"Byte\"},{\"key\":\"17\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"18\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"19\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"20\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"21\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"22\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"23\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"24\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"25\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"26\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"27\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"28\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"29\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"30\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"31\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"32\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"33\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"34\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"35\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"36\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"37\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"38\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"39\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"40\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"41\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"42\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"43\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"44\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"45\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"46\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"47\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"48\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"49\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"50\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"51\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"52\",\"value\":\
"32\",\"dataType\":\"Byte\"},{\"key\":\"53\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"54\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"55\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"56\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"57\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"58\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"59\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"60\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"61\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"62\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"63\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"64\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"65\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"66\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"67\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"68\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"69\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"70\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"71\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"72\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"73\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"74\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"75\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"76\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"77\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"78\",\"value\":\"32\",\"dataType\":\"Byte\"},{\"key\":\"79\",\"value\":\"32\",\"dataType\":\"Byte\"}],\"dataType\":\"ByteCollection\"},{\"key\":\"CYCLETIME_AVERAGE\",\"value\":\"0\",\"dataType\":\"Int16\"}],\"dataType\":\"Struct<TEST_PLC_CARRIER_UDT>\"},{\"key\":\"CARRIER\",\"value\":[{\"key\":\"BLOCK_POSITION_NUMBER\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"OPERATING_HOURS\",\"value\":\"0\",\"dataType\":\"Int32\"}],\"dataType\":\"Struct<TEST_CARRIER_UDT>\"},{\"key\":\"DRIVE\",\"value\":[{\"key\":\"BMK\",\"value\":\"0\",\"dataType\":\"Int16\"}
,{\"key\":\"POSITION\",\"value\":\"0\",\"dataType\":\"Int32\"},{\"key\":\"SPEED\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"ACT_CURRENT\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"MAX_CURRENT\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"MAX_CURRENT_AVERAGE\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"ERRORCODE\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"OPERATING_DISTANCE_MM\",\"value\":\"0\",\"dataType\":\"Int32\"},{\"key\":\"OPERATING_DISTANCE_KM\",\"value\":\"0\",\"dataType\":\"Int32\"}],\"dataType\":\"Struct<TEST_FREQU_CONV_UDT>\"},{\"key\":\"USER\",\"value\":[{\"key\":\"BARCODE_DRIVE\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"BARCODE_DRIVE_MIN\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"WIFI_SIGNAL_STRENGTH\",\"value\":\"0\",\"dataType\":\"Int16\"},{\"key\":\"WIFI_CHANNEL\",\"value\":\"0\",\"dataType\":\"UInt16\"},{\"key\":\"TX_RATE\",\"value\":\"0\",\"dataType\":\"UInt16\"},{\"key\":\"SC_RESPONSE_TIME\",\"value\":\"0\",\"dataType\":\"UInt32\"}],\"dataType\":\"Struct<TEST_CARRIER_MAX_UDT_USER>\"}],\"status\":\"Good\",\"lastChangeTimestamp\":\"2022-08-14T06:04:02.9607079Z\",\"measurementTimestamp\":\"2022-08-14T06:04:02.9607079Z\",\"dataType\":\"Struct<TEST_CARRIER_MAX_UDT>\"}},\"id\":\"c9e9009b-f79b-44c8-9a44-ebd5a59b0814\",\"$schema\":\"https://xxxxx.blob.core.windows.net/schemas/message_schemas/2021-05-05/xxx.sensor.sensortelemetrymessage.schema.json\",\"metadata\":{\"timestamp\":\"2022-08-14T06:04:22.2257279Z\",\"correlationIds\":[],\"senderIdentifier\":{\"id\":\"aaaaa-1493-4f61-993b-e3fb046908aa\",\"name\":\"OPC UA Connector\",\"type\":\"gateway\",\"route\":\"\"},\"destinationIdentifiers\":[]}}","SCHEMA_MAPPER":"xxxx.TELEMETRY","enqueuedTime":"2022-08-14T06:04:22.225Z","YEAR":2022,"MONTH":8,"DAY":14,"HOUR":6}  
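A sketch of one search-time approach for the question above, assuming the event is an outer JSON envelope whose "body" field is an escaped JSON string, and that the nested structure is the key/value arrays shown in the sample. The spath paths below are guesses from that sample and may need adjusting:

```
| spath input=_raw path=body output=inner
| spath input=inner
| eval kv=mvzip('payload.data.value{}.value{}.key', 'payload.data.value{}.value{}.value')
| mvexpand kv
| eval key=mvindex(split(kv, ","), 0), value=mvindex(split(kv, ","), 1)
| search key IN ("MAX_CURRENT", "BLOCK_POSITION_NUMBER", "OPERATING_DISTANCE_KM")
| table _time key value
```

The first spath pulls the inner JSON string out of body; the second parses it, producing multivalue fields for the key/value arrays, which mvzip and mvexpand turn into one row per sensor key.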
We currently have the use case "High Number of Login Failures from a Single Source" turned on. We would like to exclude from the search some IP ranges that we fail our staff over to. Our search at the moment is:

index=appext_o365 `o365_management_activity` Operation=UserLoginFailed record_type=AzureActiveDirectoryStsLogon app=AzureActiveDirectory | stats count dc(user) as accounts_locked values(user) as user values(LogonError) as LogonError values(authentication_method) as authentication_method values(signature) as signature values(UserAgent) as UserAgent by src_ip record_type Operation app | search accounts_locked >= 10 | `high_number_of_login_failures_from_a_single_source_filter`

I added | search src_ip!="###.##.##.17" | which does remove that one IP from the search, but obviously I don't want to manually put in 1 to 128. Any assistance would be very much appreciated.
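One way to exclude a whole range is cidrmatch() in a where clause before the stats. A sketch, keeping the rest of the search as posted (the redacted octets and the mask are placeholders you would adjust):

```
index=appext_o365 `o365_management_activity` Operation=UserLoginFailed record_type=AzureActiveDirectoryStsLogon app=AzureActiveDirectory
| where NOT cidrmatch("###.##.##.0/25", src_ip)
| stats count dc(user) as accounts_locked values(user) as user values(LogonError) as LogonError values(authentication_method) as authentication_method values(signature) as signature values(UserAgent) as UserAgent by src_ip record_type Operation app
| search accounts_locked >= 10
| `high_number_of_login_failures_from_a_single_source_filter`
```

Note that a /25 covers hosts .0 through .127, so a range of .1 to .128 would need an extra term (e.g. OR src_ip="###.##.##.128") or a different mask that actually matches your range.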
I am trying to get the KV store status of one of my heavy forwarders using the command /opt/splunk/bin/splunk show kvstore-status, and I get:

This command [GET /services/kvstore/status] needs splunkd to be up, and splunkd is down.
I created Python scripts using the Splunk SDK. They worked fine on my laptop, but when I try to run them on the Splunk Windows machine I get this warning:

DeprecationWarning: ResultsReader is a deprecated function. Use the JSONResultsReader function instead in conjunction with the 'output_mode' query param set to 'json'

reader = results.ResultsReader(query_results)
I'm looking at downloading the "Cisco Secure eStreamer Client Add-On for Splunk" app, but when I do, instead of version 5.0.3 (suitable for Splunk 8.1 and 8.2) I get version 3.5.2 (suitable for Splunk 7.0 and 7.1). Is this a Splunk issue or the uploader's?
Hi all, when you've made a query that returns the hits you want, is there a way to also list the next x events? For example:

index=*_*_windows EventCode=4688 source=XmlWinEventLog:Security *[redacted]* host=[redacted] *schtasks.exe | table _time, TargetUserName, host, CommandLine, status

This shows exactly what I need to see, but I also want to know the next 10 events that occurred after each result of this query. I hope this makes sense; if not, don't hesitate to message me for clarification. Many thanks in advance!
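A sketch of one streamstats-based approach. The key change is that the base search must stay broad (just host and source) so the follow-on events are present, with the original match condition moved into an eval; the host placeholder and the like() pattern are assumptions from the question:

```
index=*_*_windows source=XmlWinEventLog:Security host=[redacted]
| reverse
| streamstats count(eval(EventCode=4688 AND like(CommandLine, "%schtasks.exe%"))) as hit_num
| streamstats count as offset by hit_num
| where hit_num > 0 AND offset <= 11
| table _time, TargetUserName, host, CommandLine, status
```

reverse puts events in chronological order; the first streamstats numbers each matching event, so every event between one match and the next shares a hit_num; the second streamstats counts position within that run, and offset <= 11 keeps each match plus the 10 events that follow it.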
I have a search which results in these events:

user    last_event
user1   2021-12-30 08:57:36.77
user2   2022-03-12 22:29:52.333
user3   2022-03-13 08:02:48.253

I want to plot a chart with the dates on the X axis and the users on the Y axis.
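A minimal sketch, assuming last_event is a string in the format shown: parse it into _time, then render the result with a scatter chart (time on X, the user category on Y):

```
<your search>
| eval _time=strptime(last_event, "%Y-%m-%d %H:%M:%S.%Q")
| table _time user
```

%Q handles the fractional seconds; if your subsecond precision differs, the format string may need adjusting.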
Hello, I am trying to install the latest version of the Splunk universal forwarder using a Chef cookbook and getting an error. Earlier, in version 6.5.0, once the rpm was installed with rpm -ivh splunkuniversal-xx.rpm, I would run:

/splunkuniversal/bin/splunk enable boot-start --accept-license --answer-yes

and then change the password and start the service:

/splunkuniversal/bin/splunk edit user admin -password xxxxx -roles admin -auth admin:xxxxxx
service splunk start

But now in version 8.2.6, after the rpm install, when I run the above commands it asks me to create a user. Has the installation process changed, and how can I automate it again via the Chef cookbook so that it does not prompt for user creation?

Regards, Dhimanv
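Since Splunk 7.1 the first start prompts for an admin user unless one is seeded. One way to automate this is to have the cookbook drop a user-seed.conf in place before the first start; a sketch (username and password are placeholders):

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf, created before the first start
[user_info]
USERNAME = admin
PASSWORD = <your_password>
```

Splunk consumes this file on first startup and creates the admin account from it, so a subsequent non-interactive start such as splunk enable boot-start --accept-license --answer-yes --no-prompt should not ask for user creation.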
Hi, I'm building a report to count the number of events per AWS account vs. region with stats and xyseries. It works well, but I would like to filter to only the 5 rarest regions (fewest events). When I add rare, it just doesn't work:

index=aws sourcetype="aws:cloudtrail" | rare limit=5 awsRegion | stats count by awsRegion, account | xyseries account awsRegion count

Any ideas? Thanks
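The likely problem is that an inline rare replaces the events with its own 5 summary rows, leaving nothing for the following stats to count by account. One sketch that keeps the events and uses rare only as a filter, via a subsearch:

```
index=aws sourcetype="aws:cloudtrail"
    [ search index=aws sourcetype="aws:cloudtrail"
      | rare limit=5 awsRegion
      | fields awsRegion ]
| stats count by awsRegion, account
| xyseries account awsRegion count
```

The subsearch returns the 5 rarest awsRegion values, which the outer search expands into a filter, so the stats and xyseries then run only over events from those regions.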
Hi team,

May I ask a question about Splunk and ESXi compatibility? We are going to install the latest Splunk Enterprise 9.0.1 and Splunk Enterprise Security 7.0.1, plus the Splunk App, on ESXi 7.0; the latest release is ESXi 7.0 Update 3f. Are Splunk Enterprise 9.0.1, Splunk Enterprise Security 7.0.1, and the Splunk App compatible with ESXi 7.0 Update 3f?

I found https://docs.splunk.com/Documentation/AddOns/released/VMW/Hardwareandsoftwarerequirements and it says ESXi 7.0, with no more information about the Update level. Thanks.
Can anyone help me with extracting/parsing the multivalue fields  in sample event below using props and transforms conf. {\"ts\":1660880406.308522,\"uid\":\"CKFf5h2a9xFmkGFeFj\",\"id.orig_h\":\"10.10.10.16\",\"id.orig_p\":64179,\"id.resp_h\":\"8.8.4.4\",\"id.resp_p\":53,\"proto\":\"udp\",\"trans_id\":50808,\"rtt\":0.12951111793518067,\"query\":\"discord.com\",\"qclass\":1,\"qclass_name\":\"C_INTERNET\",\"qtype\":1,\"qtype_name\":\"A\",\"rcode\":0,\"rcode_name\":\"NOERROR\",\"AA\":false,\"TC\":false,\"RD\":true,\"RA\":true,\"Z\":0,\"answers\":[\"162.159.135.232\",\"162.159.138.232\",\"162.159.137.232\",\"162.159.136.232\",\"162.159.128.233\"],\"TTLs\":[300.0,300.0,300.0,300.0,300.0]  
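A sketch of a props.conf approach for the escaped-JSON event above, assuming the backslashes are literally present in the raw data and the sourcetype name is a placeholder:

```
# props.conf
[your:sourcetype]
# Index time (heavy forwarder / indexer): strip the \" escaping so the event is valid JSON
SEDCMD-strip_escapes = s/\\"/"/g
# Search time (search head): extract all JSON fields automatically
KV_MODE = json
```

With the event stored as clean JSON, KV_MODE = json extracts answers and TTLs as multivalue fields with no transforms.conf needed; if you cannot reindex, | spath at search time on the cleaned-up string is the alternative.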
I would like to take a copy of my production standalone Splunk instance and stand it up as a development machine. My production machine is running on Linux and I'd like to move a copy to a new Linux server (different hostname, domain). Since I don't want to move the data stored in the indexes, I was wondering whether I can just copy the contents of the $SPLUNK_HOME/etc folder, or are there further files that need copying across (e.g. KV store settings)? Or do I really need to copy the whole contents of $SPLUNK_HOME and then delete the index data from the development machine after the copy has finished?
On Splunk Cloud 8.2.2202.2, issuing the following command (time range: Last 30 minutes) I get an error about one time in four:

| inputlookup append=t ethos_vulnaction_generic

Error in 'inputlookup' command: External lookup table 'inputlookup' returned error code 0. Results might be incorrect. The search job has failed due to an error. You may be able view the job in the Job Inspector.

I restarted Splunk, with no luck. I'm not sure how to decipher the Job Inspector, but this inconsistency (sometimes it works, sometimes it doesn't) is strange. The KV store was populated with JSON, and the lookup does have a filter in it: NOT asset_specific = "true". I tried removing the filter to see if it affected the results, but I still get an error about one time in four. If I do a REST query of the KV store in JSON, it looks healthy to me, and with the filter taken out I still get stability issues. I do refer explicitly to the field in the lookup as details.plugin_id, which the lookup command seems to accept. A cut-down example of the JSON used to populate a record:

{
  "action_description": "zulu specific",
  "asset_specific": true,
  "details": {
    "plugin_id": [
      "153989"
    ]
  }
}
I have two separate logs (Request.log and Response.log). Events from App1 are recorded in Request.log, and events from App2 are recorded in Response.log. Every request from App1 should receive a response from App2 within 30 minutes, and the response is recorded in Response.log; App2 occasionally fails to reply within 30 minutes. Each event has a distinct field that is recorded in both log files. How do I create an SPL query across these two logs to find the unsuccessful responses? Any help?
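A sketch of a stats-based join on the shared field; correlation_id is a placeholder for whatever that distinct field is called in your data:

```
source="*Request.log" OR source="*Response.log"
| eval leg=if(match(source, "Request\.log"), "request", "response")
| stats count(eval(leg="request")) as requests
        count(eval(leg="response")) as responses
        min(_time) as first_seen
    by correlation_id
| where requests > 0 AND responses = 0
```

Grouping both logs by the shared field and counting each leg leaves exactly the IDs that have a request but no response; restrict the time range so requests younger than 30 minutes, which may still legitimately be awaiting a reply, are not flagged.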
I am working with Splunk and ServiceNow. Within ServiceNow we are able to pass variable field values by using the following notation: $result.my_cool_field$

So, if an event severity could change based on certain things, I may have SPL logic that creates a field named "event_severity" that can be anywhere between 1-4. I then generate an alert within Splunk and have it open an incident within ServiceNow, with the incident severity set by the variable $result.event_severity$. This works great!

Now I am creating some dashboards that look through all of our alerts and dump out titles, severity, permissions, etc. I am using the REST API to bring back the data, which works great, except that some of the alert severity values have been set to specific values (i.e. "1", "2", etc.) and some are variable, so the value is not a number but the variable mentioned above ($result.event_severity$).

The issue I am running into is that when I pull in all of the alerts along with their severities, the field name being wrapped in dollar symbols ("$") causes problems in the dashboard. The dashboard treats these field names as dashboard tokens, and then the dashboard component won't do anything because it is waiting for "input"; in other words, it is waiting for some value that will never be set, to replace the field name that it thinks is a variable.

Is there any way to escape the dollar symbols within the SPL when I am querying for field names?

| rest /servicesNS/-/-/saved/searches | search disabled=0 eai:acl.app=my_cool_app severity IN ("1","$result.event_severity$")

I need it to return alerts where severity=1 OR severity=$result.event_severity$, but the dashboard panel won't do it because it is treating "$result.event_severity$" as a dashboard token. Any help is very appreciated!
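In Simple XML dashboards a literal dollar sign in a panel's search is escaped by doubling it, so the token parser leaves it alone. A sketch of the same query with escaped dollars:

```
| rest /servicesNS/-/-/saved/searches
| search disabled=0 eai:acl.app=my_cool_app severity IN ("1","$$result.event_severity$$")
```

The dashboard substitutes each $$ back to a single $ before running the search, so the SPL that executes still compares against the literal string $result.event_severity$.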
Hi, I'm relatively new to Splunk so it's been a bit of a learning curve! I'm building a dashboard using Splunk Cloud Dashboard Studio that shows both overview and site-specific visualisations, the key items being a map to show where all sites are and, once a site is selected, some site-specific data.

Basics of the dashboard:

Site Name dropdown (sets token $SiteName$): All (*), Site 1, Site 2
Map: configured with markers (lat/long)
Single Value: configured to display the site name selected from the dropdown ($SiteName$ token)
Basic search: <base search> | search "Site Name" = "$SiteName$"

Behaviour: the Map and Single Value visualisations work as desired when a specific site is selected, e.g. <base search> | search "Site Name" = "Site 2"

Issue: when All is selected (token $SiteName$ set to *), the search becomes <base search> | search "Site Name" = "*". The Map shows all sites (desired), but the Single Value shows "Site 1", as it is the first returned value of the search (all sites are returned in the results).

Any suggestions?
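One possible sketch: let the Single Value's search derive its own label, so the wildcard case collapses to an explicit "All Sites" instead of whichever site happens to come first. The "All Sites" label is an assumption you can change:

```
<base search>
| search "Site Name" = "$SiteName$"
| stats values("Site Name") as site
| eval site=if(mvcount(site) > 1, "All Sites", site)
| table site
```

When a single site is selected, values() returns one value and the Single Value shows it; when All (*) is selected, the multivalue result triggers the eval and the panel shows "All Sites".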
So I have the following SPL query:

<basic search> | chart count by path_template, http_status_code | addtotals fieldname=total | foreach 2* 3* 4* 5* [eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),"<<FIELD>>"=if('<<FIELD>>'=0 OR '<<FIELD>>'=100, '<<FIELD>>','<<FIELD>>'." (".'percent_<<FIELD>>'."%)")] | fields - percent_* total

Basically this is supposed to NOT display the percentage if it's 0 or 100. However, running this query still displays 100% numbers. Do you know what is wrong in this condition check? I even took out the OR and only had the condition check for 100, and it still didn't work. Thanks!
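If the intent is to hide the percentage when it is 0 or 100, the likely culprit is that the if() tests the raw count ('<<FIELD>>') against 0 and 100 rather than the computed percentage, so a status code with, say, 37 events at 100% still gets the suffix. A sketch comparing the percent field instead:

```
<basic search>
| chart count by path_template, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5*
    [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),
      "<<FIELD>>"=if('percent_<<FIELD>>'=0 OR 'percent_<<FIELD>>'=100,
          '<<FIELD>>',
          '<<FIELD>>'." (".'percent_<<FIELD>>'."%)") ]
| fields - percent_* total
```

The only change is inside the if(): both comparisons now reference 'percent_<<FIELD>>', which was already computed in the preceding eval clause.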