All Topics



Hi team, may I ask a question about Splunk and ESXi compatibility? We are going to install the latest Splunk Enterprise 9.0.1, Splunk Enterprise Security 7.0.1, and a Splunk App, on ESXi 7.0; the latest release is ESXi 7.0 Update 3f. Are Splunk Enterprise 9.0.1, Splunk Enterprise Security 7.0.1, and the Splunk App compatible with ESXi 7.0 Update 3f? I found https://docs.splunk.com/Documentation/AddOns/released/VMW/Hardwareandsoftwarerequirements, which lists ESXi 7.0 but says nothing about specific updates. Thanks.
Can anyone help me with extracting/parsing the multivalue fields in the sample event below using props.conf and transforms.conf? {\"ts\":1660880406.308522,\"uid\":\"CKFf5h2a9xFmkGFeFj\",\"id.orig_h\":\"10.10.10.16\",\"id.orig_p\":64179,\"id.resp_h\":\"8.8.4.4\",\"id.resp_p\":53,\"proto\":\"udp\",\"trans_id\":50808,\"rtt\":0.12951111793518067,\"query\":\"discord.com\",\"qclass\":1,\"qclass_name\":\"C_INTERNET\",\"qtype\":1,\"qtype_name\":\"A\",\"rcode\":0,\"rcode_name\":\"NOERROR\",\"AA\":false,\"TC\":false,\"RD\":true,\"RA\":true,\"Z\":0,\"answers\":[\"162.159.135.232\",\"162.159.138.232\",\"162.159.137.232\",\"162.159.136.232\",\"162.159.128.233\"],\"TTLs\":[300.0,300.0,300.0,300.0,300.0]
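If the backslash-escaped quotes appear literally in the raw data, one approach (a sketch only; the sourcetype name `zeek:dns` is an assumption) is to unescape them so the event is valid JSON and let Splunk's JSON extraction handle the arrays, which become multivalue fields automatically:

```
# props.conf -- sourcetype name is a placeholder assumption
[zeek:dns]
# strip literal backslash-escapes at index time so the event is valid JSON
SEDCMD-unescape_quotes = s/\\"/"/g
# search-time JSON extraction; arrays such as "answers" and "TTLs"
# come out as multivalue fields
KV_MODE = json
```

With valid JSON in the event, no transforms.conf stanza should be needed for the multivalue fields themselves.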
I would like to take a copy of my production standalone Splunk instance and stand it up as a development machine. My production machine runs on Linux and I'd like to move a copy to a new Linux server (different hostname and domain). Since I don't want to move the data stored in the indexes, I was wondering whether I can just copy the contents of the $SPLUNK_HOME/etc folder, or are there further files that need copying across (e.g. KV store settings)? Or do I really need to copy the whole contents of $SPLUNK_HOME and then delete the index data from the development machine after the copy has finished?
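One common pattern (a sketch only; the hostname `devhost` and the paths are placeholder assumptions) is to copy just the configuration tree, or to copy all of $SPLUNK_HOME while excluding the index and KV store data:

```shell
# Option 1: configuration only (apps, users, system settings)
rsync -av /opt/splunk/etc/ devhost:/opt/splunk/etc/

# Option 2: everything except index and KV store data
rsync -av --exclude 'var/lib/splunk/' /opt/splunk/ devhost:/opt/splunk/
```

Note that KV store collections live under var/lib/splunk/kvstore, so a copy of etc/ alone carries the collection definitions but not their contents.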
On Splunk Cloud 8.2.2202.2, issuing the following command over the last 30 minutes gives an error about one time in four:

| inputlookup append=t ethos_vulnaction_generic

Error in 'inputlookup' command: External lookup table 'inputlookup' returned error code 0. Results might be incorrect. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

I restarted Splunk with no luck. I'm not sure how to decipher the Job Inspector, but this inconsistency (sometimes it works, sometimes it doesn't) is strange. The KV store was populated with JSON, and the lookup does have a filter in it: NOT asset_specific = "true". I tried removing the filter to see whether it affected the results, but I still get an error about one time in four. If I do a REST query of the KV store in JSON it looks healthy to me; besides, if I take the filter out I still get stability issues.

"asset_specific": true,

A cut-down example of the JSON used to populate the record; I refer explicitly to the field in the lookup as details.plugin_id, which the lookup command seems to accept:

{ "action_description": "zulu specific", "asset_specific": true, "details": { "plugin_id": [ "153989" ] } }
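One way to narrow down intermittent failures like this (a debugging sketch, not a fix) is to strip the query down to a bare count and rerun it several times, so that the lookup definition's filter and any accelerations are the only remaining variables:

```
| inputlookup ethos_vulnaction_generic
| stats count
```

If the bare count also fails intermittently, the problem is likely in the KV store or lookup definition itself rather than in the filter or the calling search.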
I have two separate logs ( Request.log, and Response.log ).   Events from App1 will be recorded in Request.log. Events from App2 will be recorded in Response.log.   Every request from App1 will receive a response from App2 within 30 minutes, and the response will be recorded in the Response.log file.  App2 occasionally fails to reply within 30 minutes. Each event has a distinct field, which will be recorded in both log files. How do I create an SPL query using these two distinct logs to search for the unsuccessful responses? Any help?
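One way to find requests that never received a response (a sketch; `transaction_id` stands in for the distinct shared field, and the source names follow the post) is to aggregate over both sources by that field and keep the IDs that only appear in the request log:

```
(source="Request.log" OR source="Response.log")
| stats values(source) AS sources BY transaction_id
| where mvcount(sources)=1 AND sources="Request.log"
```

The `transaction` command with maxspan=30m would also work, but stats generally scales better on large volumes.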
I am working with Splunk and ServiceNow. Within ServiceNow we are able to pass variable field values using the following notation: $result.my_cool_field$. So, if an event severity could change based on certain things, I may have SPL logic that creates a field named "event_severity" that can be anywhere between 1 and 4. I then want to generate an alert within Splunk and have that open an incident within ServiceNow, whose severity I can set by passing the variable $result.event_severity$. This works great!

Now I am creating some dashboards that will help look through all of our alerts and dump out titles, severity, permissions, etc. I am using the REST API to bring back the data, which works great, except that some of the alert severity values have been set to specific values (i.e. "1", "2", etc.) and some are variable, so the value is not a number but the variable mentioned above ($result.event_severity$). The issue I am running into is that when I pull in all of the alerts along with their severities, the dashboard misbehaves because the field value is wrapped in dollar signs ("$"). The dashboard treats these values as dashboard tokens, and then the dashboard component won't do anything because it is waiting for "input"; in other words, it is waiting for some value that will never be set, to replace the string that it thinks is a token. Is there any way to escape the dollar signs within the SPL when I am querying for field names?

| rest /servicesNS/-/-/saved/searches | search disabled=0 eai:acl.app=my_cool_app severity IN ("1","$result.event_severity$")

I need it to return alerts where severity=1 OR severity=$result.event_severity$, but the dashboard panel won't do it because it is treating "$result.event_severity$" as a dashboard token. Any help is very appreciated!
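In Simple XML dashboards a literal dollar sign is written as two dollar signs, so the token parser leaves it alone. A sketch of the panel's search with the variable severity escaped:

```
| rest /servicesNS/-/-/saved/searches
| search disabled=0 eai:acl.app=my_cool_app severity IN ("1","$$result.event_severity$$")
```

The doubled `$$` only applies inside the dashboard source; the same search run in the search bar uses single dollar signs.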
Hi, I'm relatively new to Splunk so it's been a bit of a learning curve! I'm building a dashboard using Splunk Cloud Dashboard Studio that shows both overview and site-specific visualisations, the key items being a map showing where all sites are and, once a site is selected, some site-specific data.

Basics of the dashboard:

Site Name dropdown (sets token $SiteName$): All (*), Site 1, Site 2
Map: configured with markers (lat/long)
Single Value: configured to display the site name selected from the dropdown ($SiteName$ token)
Basic search: <base search> | search "Site Name" = "$SiteName$"

Behaviour: the Map and Single Value visualisations work as desired when a specific site is selected, e.g. <base search> | search "Site Name" = "Site 2"

Issue: when All is selected (setting $SiteName$ to *), the search becomes <base search> | search "Site Name" = "*". The Map shows all sites (desired), but the Single Value shows "Site 1", as it's the first returned value of the search (all sites are returned in the results).

Any suggestions?
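One dashboard-agnostic option (a sketch building on the search above; the "All Sites" label is an assumption) is to let the Single Value's search compute its own label, so it shows a summary string when more than one site matches:

```
<base search> | search "Site Name" = "$SiteName$"
| stats dc("Site Name") AS site_count values("Site Name") AS sites
| eval display=if(site_count > 1, "All Sites", sites)
| table display
```

This keeps the token logic unchanged and avoids needing a separate label token.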
So I have the following SPL query: <basic search> | chart count by path_template, http_status_code | addtotals fieldname=total | foreach 2* 3* 4* 5* [eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),"<<FIELD>>"=if('<<FIELD>>'=0 OR '<<FIELD>>'=100, '<<FIELD>>','<<FIELD>>'." (".'percent_<<FIELD>>'."%)")] | fields - percent_* total Basically this is supposed to NOT display the percentage if it's 0 OR 100. However, running this query still displays 100% figures. Do you know what is wrong in this condition check? I even took out the OR and had the condition check only for 100, and it still didn't work. Thanks!
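A likely cause (an observation, not a confirmed answer from the thread): the if() compares the raw count '<<FIELD>>' to 0 and 100 rather than the computed percentage, so it only suppresses the suffix when the count itself happens to be 0 or 100. A sketch with the condition moved onto the percent_ field:

```
| foreach 2* 3* 4* 5*
    [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),
           "<<FIELD>>"=if('percent_<<FIELD>>'=0 OR 'percent_<<FIELD>>'=100,
                          '<<FIELD>>',
                          '<<FIELD>>'." (".'percent_<<FIELD>>'."%)") ]
```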
I need to collect data using HEC from an Internet source into my on-prem Splunk environment. It looks like I can run HEC on a Heavy Forwarder and then forward the collected data to my indexer cluster. I will use https so the communication is encrypted and DNS and networking/firewall shouldn't be a problem. Any gotchas or issues using a HF with HEC in a DMZ to collect and forward data to an on-prem indexer cluster?
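The setup is generally straightforward. A sketch of the two configuration pieces on the heavy forwarder (token value, input name, index, and indexer hostnames are all placeholder assumptions):

```
# inputs.conf on the heavy forwarder
[http]
disabled = 0
enableSSL = 1
port = 8088

[http://internet_source]
token = <generated-token>
index = main

# outputs.conf -- forward collected data to the indexer cluster
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

The usual gotchas are certificate validation on the HEC listener (clients must trust the cert presented on 8088) and making sure the DMZ firewall allows the outbound 9997 connections to the indexers.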
A dashboard which uses tabs.js and tabs.css worked all along, and suddenly we get this error message:

A custom JavaScript error caused an issue loading your dashboard, likely due to the dashboard version update. See the developer console for more details.

What can it be?
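Since the message points at the dashboard version update, one thing worth checking (a suggestion, not a confirmed fix) is the version attribute on the dashboard's root element and whether tabs.js is compatible with the newer jQuery that version="1.1" dashboards load:

```
<form version="1.1" script="tabs.js" stylesheet="tabs.css">
  ...
</form>
```

If the script relies on APIs removed in the jQuery upgrade, the developer console should show which call fails.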
Dear Splunk community, I have the following search query, which shows the count and percentage of url (Y-axis) by http status code (X-axis): <basic search> | chart count by url, http_status_code | addtotals fieldname=total | foreach 2* 3* 4* 5* [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total), "<<FIELD>>"='<<FIELD>>'." (".'percent_<<FIELD>>'."%)" ] | fields - percent_* total (A screenshot of the query result was attached here.) Now, I need to insert an if clause so that if the percentage is either 0 OR 100, the percentage is NOT displayed. How would I modify the above query to get this result? Thank you very much for your help!
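One way to suppress the percentage at 0 or 100 (a sketch building directly on the query above; the condition tests the computed percent_ field, not the raw count) is:

```
<basic search>
| chart count by url, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5*
    [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total),
           "<<FIELD>>"=if('percent_<<FIELD>>'=0 OR 'percent_<<FIELD>>'=100,
                          '<<FIELD>>',
                          '<<FIELD>>'." (".'percent_<<FIELD>>'."%)") ]
| fields - percent_* total
```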
Hello, I need some guidance to install the CyberArk TA in a single-server Splunk Enterprise environment. How would I proceed with this installation process? Any help will be highly appreciated. Thank you so much.
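On a single-server deployment, a TA installs like any other app. Besides the Web UI route (Apps > Manage Apps > Install app from file), a command-line sketch (the archive file name is a placeholder; check the TA's own docs for any inputs it needs configured afterwards):

```shell
# Extract the downloaded package into the apps directory and restart
tar -xzf splunk-add-on-for-cyberark.tgz -C $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart
```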
I need only the url column's results in blue, and the remaining 4 fields in plain text (i.e. black). I want something like below:

<row>
  <panel>
    <table>
      <search>
        <query>| inputlookup testing.csv | eval ClickHere=url | table _time duration_seconds dv_assignment_group dv_number dv_u_substate url</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">100</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">cell</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <format type="number" field="dv_u_substate"></format>
      <drilldown>
        <condition field="url">
          <link target="_blank">$row.url|n$</link>
        </condition>
        <condition field="*"></condition>
      </drilldown>
    </table>
  </panel>
</row>
I am making a custom Splunk command that looks to see if today is a holiday and changes the threshold if it is. If it's a holiday, would I just use an if statement to change the threshold value? Current Python code:

# changing threshold value if today is a holiday
# sketch assuming the third-party `holidays` package; threshold values are placeholders
import datetime
import holidays

us_holidays = holidays.UnitedStates()
threshold = 100
if datetime.date.today() in us_holidays:
    threshold = 50
We have data on a Splunk instance that needs to be retained for audit purposes. The new instance owner will not allow the old data to be ingested. What would be a method to retain the data for the required time period without retaining the old instance?
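One common pattern (a sketch; the index name, path, and retention period are placeholder assumptions) is to configure the old instance to archive frozen buckets to a directory instead of deleting them, then keep that directory for the audit period; archived buckets can later be thawed into any Splunk instance for search if needed:

```
# indexes.conf -- archive buckets on freeze rather than deleting them
[audit_data]
coldToFrozenDir = /archive/splunk/audit_data
# retention period before buckets freeze (example: ~6 years)
frozenTimePeriodInSecs = 189216000
```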
@douglashurd I had eStreamer Add-on v5.1.3 installed and believe the bytes-in/bytes-out and packets-in/packets-out are inverted.

From cisco:firepower:syslog:
raw event - SrcIP: [Internet-IP], DstIP: [Firewall-IP], InitiatorPackets: 1, ResponderPackets: 0, InitiatorBytes: 54, ResponderBytes: 0
parsed fields - src_ip = [Internet-IP], dest_ip = [Firewall-IP], packets_received = 0, bytes_out = 54

From cisco:estreamer:data:
raw event - src_ip= [Internet-IP], dest_ip= [Firewall-IP], src_pkts=1, dst_pkts=0, src_bytes=54, dest_bytes=0
parsed fields - src_ip = [Internet-IP], dest_ip = [Firewall-IP], packets_in=1, packets_out=0, bytes_in=54, bytes_out=0

As you can see in the parsed events, the syslog event indicates 54 bytes sent outbound, while the eStreamer log indicates the bytes are inbound. I believe the raw logs in both cases indicate that the bytes were sent outbound, so I think the cisco:estreamer:data parser may be incorrect here. Thanks, Gord T.
Hi all, I am trying to extract the threshold values I have defined for some BTs under an app. I can find API calls to get controller-level settings, health rules, apps in a controller, etc., but nothing like threshold settings (the ones used to classify a transaction as slow/very slow). Is anyone aware of a solution for this? Regards, Philippe
I need to know where I can view the source index of the event that Splunk Enterprise Security uses to create an alert, because it shows the event as coming from the risk index.
Hi, can someone help me with field extraction for the string /home/mysqld/databasename/audit/audit.log? I want to extract databasename as Database. I have written a regex but am getting an error; can someone help with the correct one?

rex field=source "\/home\/\/mysqld\//(?<Database>.*)/audit\/"
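A sketch of a working extraction (the doubled slashes and the stray `/` after `mysqld\/` in the original regex appear to be the problem; forward slashes need no escaping in SPL rex):

```
rex field=source "/home/mysqld/(?<Database>[^/]+)/audit/"
```

Using `[^/]+` instead of `.*` keeps the match from running past the databasename segment.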
We're using a Universal Forwarder. I'm manually updating the inputs.conf file, but I do not see the changes being reflected when searching in the Splunk UI. I have restarted the forwarder. I'm not sure
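One way to confirm the forwarder actually picked up the edit (run on the forwarder host) is btool, which prints the merged inputs configuration and which file each setting comes from:

```shell
$SPLUNK_HOME/bin/splunk btool inputs list --debug
```

If the edited stanza does not appear, the change was likely made in the wrong directory or is being overridden by a higher-precedence app (e.g. one pushed by a deployment server).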