All Topics


I have a set of CSV files created every day. None of the files contain any date or time printed within them, yet when I index them the data is indexed with various timestamps spread over past dates. I have also used crcSalt = <SOURCE>, the sourcetype is CSV, and it is set to Current. Every file's created date and last-modified date are the same (e.g. today's date), but the data still gets indexed with different timestamps.
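When the events carry no timestamp, Splunk guesses one from the file contents or modification time, which can produce scattered past dates. Timestamping can instead be pinned to index time in props.conf. A minimal sketch, assuming the sourcetype is named csv (substitute your actual sourcetype name):

```ini
# props.conf on the parsing tier (heavy forwarder or indexer)
[csv]
# Skip timestamp extraction and stamp events with the current index time
DATETIME_CONFIG = CURRENT
```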
Our Tenable scanner was recently upgraded from the standalone Nessus product to Security Center. There is a large data-schema change from the old Tenable TA to the new one. The old data contained scan information such as scan group names and the IP ranges scanned; is it possible to pull the same information with this new TA?
Does anyone know how the outputlookup command is implemented? commands.conf does not reference a Python script for it. I want to change how new lookup files are created so that they are private and assigned to an owner.
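outputlookup is a built-in search processor compiled into splunkd, which is why commands.conf has no Python entry for it; its file-creation behavior cannot be edited in place. The permissions of a lookup file can, however, be changed after creation through the REST ACL endpoint that knowledge objects expose. A hedged sketch (app name, lookup name, owner, and credentials are all hypothetical):

```shell
# Make an existing lookup file private to a specific owner
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/data/lookup-table-files/mylookup.csv/acl \
  -d owner=jsmith \
  -d sharing=user
```

A scheduled script or alert action could run this right after the search that creates the file.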
Hello Splunkers, this is my goal: a table with 3 columns (field, field_type, field_len), exported as CSV, where the CSV file name must be the sourcetype used as input (as a condition).

field = every field of the sourcetype
field_type = string, bool, int, etc.
field_len = field length

The issue is that I must launch the search for each sourcetype in my indexes (and that's a lot). My input CSV lists all the sourcetypes I use:

Sourcetype
sourcetype1
sourcetype2
sourcetype3
...
sourcetypeN

My query is currently:

index=* sourcetype=MY_SOURCETYPE | fieldsummary | eval field_type=typeof(field), field_len=len(field) | table field, field_type, field_len | dedup field

I want to read the sourcetypes from the CSV instead of hard-coding sourcetype="MY_SOURCETYPE", and export one CSV per sourcetype, something like:

index=main sourcetype=$sourcetype_from_csv_file$ | fieldsummary | eval field_type=typeof(field), field_len=len(field) | table field, field_type, field_len | dedup field | outputcsv $sourcetype_from_csv_file$.csv

How can I build this request, since I don't know how to export from within a search or how to use a CSV as input?
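One way to drive a search from a CSV is the map command, which runs a sub-search once per input row and substitutes the row's column values as $tokens$. A sketch, assuming the lookup file is named sourcetypes.csv with a Sourcetype column (map is expensive, so mind the maxsearches cap):

```
| inputlookup sourcetypes.csv
| map maxsearches=100 search="search index=* sourcetype=\"$Sourcetype$\"
    | fieldsummary
    | eval field_type=typeof(field), field_len=len(field)
    | table field, field_type, field_len
    | dedup field
    | outputcsv $Sourcetype$.csv"
```

Each iteration writes its own outputcsv file named after the sourcetype of that row.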
Hello, I'm trying to configure two different dropdown menus. The idea is to give the user a choice: either pick a value from the first dropdown (all videos - $dropdown_token$) OR pick one of the top 10 ($field1$). The problem is that I don't know how to wire this into my visuals; the dashboard only runs if I select the first dropdown. I tried different ways of writing the query, such as:

<panel>
<single>
<title>Unique Viewers</title>
<search>
<query>index=index Operation="views" | search ResourceTitle="$dropdown_token$","$field1$" | stats distinct_count(UserId)</query>
<earliest>$picktime_token.earliest$</earliest>
<latest>$picktime_token.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>

I also tried | search ResourceTitle="$dropdown_token$" OR "$field1$". I know it's a beginner's question, but I'd appreciate your help. Thanks!
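A common pattern for this is to give each dropdown a wildcard default (add <default>*</default> to each input definition) and reference both tokens in the query. Whichever menu is left untouched then matches everything, so either selection narrows the search. A sketch of the query:

```
index=index Operation="views" ResourceTitle="$dropdown_token$" ResourceTitle="$field1$"
| stats dc(UserId) AS unique_viewers
```

With both defaults set to *, selecting a value in one dropdown filters the results while the other remains a no-op wildcard.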
Hello, I'm trying to put a query together to monitor/view emails sent externally to personal domains, e.g. johnsmith@corporation.com to john@smith.com or johnsmith@personalbusiness.com. I'm not looking for generic personal addresses like johnsmith@gmail.com or hotmail.com; specifically, I want domains that correlate with the user's name and appear to be a personal domain.

index=***this is a corp. email index*** (from_domain="corp.com" AND rcpt_domain="??????")

Any help is appreciated! Thanks!
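One heuristic is to compare the local part of the corporate sender address against the recipient domain. A sketch, under the assumption that fields named from, from_domain, and rcpt_domain exist (adjust to your actual field names):

```
index=corp_email from_domain="corp.com" rcpt_domain!="corp.com"
| eval sender_user=lower(mvindex(split(from, "@"), 0))
| eval rcpt_base=lower(mvindex(split(rcpt_domain, "."), 0))
| where len(rcpt_base) > 3 AND (like(sender_user, "%" . rcpt_base . "%") OR like(rcpt_base, "%" . sender_user . "%"))
| stats count BY from, rcpt_domain
```

The len() guard avoids matching very short domain stems; tune it and the substring logic to your naming conventions.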
Hello, I am not able to get my data into the newly created "varlog" index. The index is an event index and active in the system, and I cannot see any issues with it. I have the following inputs.conf stanza:

[default]
host = ccd03v005084

[monitor:///var/log/*]
index = varlog
disabled = 0
interval = 15
sourcetype = syslog

Interestingly, when I change "varlog" to "main", the data arrives in "main". Could this be due to the Splunk Enterprise Trial license that I am using? Would the trial license allow creating and indexing into new indexes? If not, how would I investigate it further? There is no sign of any problem in the forwarder's splunkd.log or anywhere else I checked. Kind regards, Kamil
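The trial license limits daily indexing volume, not the number of indexes, so that is unlikely to be the cause. The usual culprit when "main" works but a custom index does not is that the index is defined only on the forwarder: events addressed to an index the indexer does not know are typically discarded with a warning in the indexer's splunkd.log (which is why the forwarder's log shows nothing). A minimal sketch of the stanza that must exist on the indexer:

```ini
# indexes.conf on the indexer (not only on the forwarder)
[varlog]
homePath   = $SPLUNK_DB/varlog/db
coldPath   = $SPLUNK_DB/varlog/colddb
thawedPath = $SPLUNK_DB/varlog/thaweddb
```

After adding it, restart the indexer and re-check the indexer-side splunkd.log for index-related warnings.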
Hello, I have an alert scheduled to run at 8 AM every day using a cron expression. It checks events from different site locales. Since this alert must be scheduled for different regions (EU, APAC, NA, etc.), is it possible to run the alert independently for each time zone? (For example: a daily alert that runs its checks at 8 AM in each market's local time zone.) Regards
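A single saved search has a single cron schedule, so the usual approach is one clone per region, each with its cron offset (or owned by a user whose time-zone setting matches the region). A hedged sketch with illustrative offsets that ignore DST (stanza names, search text, and offsets are examples only):

```ini
# savedsearches.conf: one clone per region, cron expressed in server (UTC) time
[Daily Check - EU (08:00 CET)]
cron_schedule = 0 7 * * *
search = index=myindex region=EU ...

[Daily Check - APAC (08:00 SGT)]
cron_schedule = 0 0 * * *
search = index=myindex region=APAC ...
```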
I'm looking into using Splunk as a data-integration tool, so that services like Salesforce can get information from Splunk instead of relying on my server to call their APIs. My logic is that if I report every event to Splunk, and Splunk has a REST API, why report to additional services rather than have them read from Splunk (or have Splunk write to them)? I'd love to hear suggestions from anyone who has accomplished such a setup and has insights on considerations such as access tokens, API limitations, data enrichment, and shortcuts (like Splunk apps that facilitate this). Examples that demonstrate approaches I have considered: 1. I set up an alert for a specific kind of Splunk log (e.g. a log for a user that deleted their profile), and the alert action uses a script/webhook to make a POST request to Salesforce, letting it know a lead should be deleted. 2. I define a saved search/report that aggregates numbers from logs describing user activities, and set up a service to poll this via the Splunk Cloud REST API and update accordingly.
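For the polling pattern in example 2, the REST export endpoint can stream results of an ad-hoc search directly. A hedged sketch (host, credentials, index, and field names are all hypothetical):

```shell
curl -k -u admin:changeme \
  https://splunk.example.com:8089/services/search/jobs/export \
  --data-urlencode search='search index=app_logs action=profile_deleted | stats count by user_id' \
  -d output_mode=json \
  -d earliest_time=-24h
```

In Splunk Cloud, authentication would normally use a token rather than basic auth, and the management port may require allow-listing.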
I can't quarantine a device through automation. The action "set quarantine approved" fails with:

Message: Error from server. Status Code: 422 Data from server: {"details":[{"type":"error","code":2015,"path":"state","message":"Additional properties are not allowed"}],"route":"/hx/api/v3/hosts/agent_id/containment","message":"Unprocessable Entity"}
We regularly perform patching on Windows servers monitored in Splunk, where we have to initiate a maintenance window for 300 to 400 servers in one go. We don't want to put the entire service into maintenance, which would affect monitoring of the other Windows servers. We tried the REST API, but it is awkward because the start and end times must be in epoch format and server names cannot be used directly. Please let me know whether there is any custom solution to achieve this requirement.
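The epoch times the REST API expects don't have to be computed by hand; a wrapper script can generate them at call time before looping over the server list. A small sketch using GNU date:

```shell
# Maintenance window from now until now + 4 hours, as epoch seconds
START=$(date +%s)
END=$(date -d "+4 hours" +%s)
echo "start_time=$START end_time=$END"
```

The same wrapper could translate server names to whatever identifiers the maintenance endpoint requires, so operators only ever supply hostnames and human-readable times.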
Hi all, I am stuck on a scenario: if a user runs a search in a specific app, that app's folder name should be shown as a field. Is there any way to get the current app name using REST, metadata, or any other command? Thanks.
I'm trying to get specific details and need some help. I have two location-based input types, one for the state and one for the city. I want to see the following details about the city: number of station IDs, station ID, station name, name of the city, and name of the state.
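Assuming the events carry fields such as station_id, station_name, city, and state (hypothetical names), and the two inputs set tokens $state_token$ and $city_token$, a single stats can return all of the requested details:

```
index=stations state="$state_token$" city="$city_token$"
| stats dc(station_id)        AS station_count
        values(station_id)    AS station_ids
        values(station_name)  AS station_names
        BY city, state
```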
I am trying a date comparison but cannot get the exact output. The condition for the date compare:

if First_Date is before Verifed_Date, the value is 1;
else if First_Date equals Verifed_Date, the value is 0;
else the value is 2.
If D2_ExecutionDate is null or empty, Verified should be null.

The Verified condition is:

(DateCompare(Verifed_Date, First_Date) == 0 || DateCompare(Verifed_Date, First_Date) == 1) && (DateCompare(Verifed_Date, Second_Date) == 2 || DateCompare(Verifed_Date, Second_Date) == 0)

then get the verified value from PhaseMapping: verified = R06.1. For reference I attached a screenshot; please help me with this. Thanks in advance, Renuka
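In SPL, date comparison like this is usually done by parsing the strings with strptime and branching with case. A sketch, assuming the dates are strings in YYYY-MM-DD format (adjust the format string to the actual data; Verifed_Date is kept spelled as in the post):

```
| eval first_t    = strptime(First_Date, "%Y-%m-%d")
| eval verified_t = strptime(Verifed_Date, "%Y-%m-%d")
| eval date_compare = case(isnull(D2_ExecutionDate) OR D2_ExecutionDate="", null(),
                           first_t < verified_t,  1,
                           first_t == verified_t, 0,
                           true(),                2)
```

The same pattern repeated for Second_Date gives the inputs to the Verified condition.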
Hi, I have a scenario where the index and sourcetype are the same, and I am trying the conditions below.

chart dc(run) OVER app BY event -- this gives me the dc of run for each app, per event
stats dc(run) AS run BY app -- this gives me the dc of run per app

I used join to get this done, as below, but the query takes a long time to run:

base search...
| chart dc(run) OVER app BY event
| join app
    [search source=mysource | stats dc(run) AS run BY app]
| eval new_val = run - event1 - event2
| fields app event1 event2 new_val run
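The join can usually be avoided by computing both aggregations in a single stats pass using the dc(eval(...)) pattern. A sketch of a join-free alternative (the event names event1/event2 are illustrative and must match your actual event values):

```
source=mysource
| stats dc(run) AS run
        dc(eval(if(event="event1", run, null()))) AS event1
        dc(eval(if(event="event2", run, null()))) AS event2
        BY app
| eval new_val = run - event1 - event2
| fields app event1 event2 new_val run
```

One pass over the raw events replaces the chart-plus-subsearch combination, which is typically much faster.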
I am using the Network Toolkit app to ping some of my servers and get their status as "Down" or "Up", and I use a lookup file to show other details of each server, such as Area, Location, Data_Port, and Time_Server. Along with this, I need to show the ping status of the Time_Server in the same table panel; I don't want to create separate panels for time-server health. (Screenshot attached for reference.) Basically, there should be one more column named "Time_Server_Status" at the end of the table showing "Down" or "Up". I have installed the Network Toolkit app on a HF and collect the data into my index: server pings arrive as sourcetype "ping", and time-server pings are collected under a different sourcetype, "timeserver_ping". In our environment I want to monitor 200 server IPs and 10 Time_Server IPs, using the lookup file to show all the details mentioned above. How can I show both the server and Time_Server ping status in the same table? Please advise.
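One hedged sketch of combining the two sourcetypes (the field names dest and status, the index name, and the lookup name are assumptions; check what the ping sourcetypes actually emit): compute each server's latest status, enrich from the lookup, then attach the time-server status keyed on the Time_Server field:

```
index=mynet sourcetype=ping
| stats latest(status) AS Server_Status BY dest
| lookup server_details dest OUTPUT Area Location Data_Port Time_Server
| join type=left Time_Server
    [ search index=mynet sourcetype=timeserver_ping
      | stats latest(status) AS Time_Server_Status BY dest
      | rename dest AS Time_Server ]
| table dest Server_Status Area Location Data_Port Time_Server Time_Server_Status
```

With only 10 time servers, the join subsearch stays tiny, so the usual join performance concerns do not apply here.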
Hi team, we are currently planning to upgrade our core Splunk Cloud instance from 7.2.9.1 to 8.x. When I checked the Splunk Upgrade Dashboard, I found around 20+ servers whose Universal Forwarders need upgrading, since they are currently running version 6.5.1. The OS on those servers is RHEL 5 (5.11) with kernel version 2.6.18-439.el5. I checked the following link:

https://www.splunk.com/page/previous_releases/universalforwarder#x86_64linux

I can see that newer versions of the Universal Forwarder are available, so can I go ahead and upgrade to the latest version? I also want to know whether the kernel (2.6.18-439.el5) is 32-bit or 64-bit, and which UF version is recommended, so we can clear this road-blocker for the core Splunk Cloud upgrade from 7.2.9.1 to 8.x.
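Before picking a UF version, check each release's system requirements against RHEL 5: newer UF lines have dropped support for 2.6-era kernels, so the newest version listed is not necessarily installable there. The architecture question can be answered directly on each host:

```shell
# x86_64 means a 64-bit kernel; i386/i586/i686 mean 32-bit
uname -m
# Prints 32 or 64 for the userland word size
getconf LONG_BIT
```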
Dear team, we used the p25() and p75() functions to retrieve percentile values for a range of values in Splunk. To validate them, we calculated the percentiles in Excel using the PERCENTILE.EXC() and PERCENTILE.INC() functions. From the comparison we learned that the Splunk and Excel values match for PERCENTILE.INC(), but the PERCENTILE.EXC() values differ from Splunk's. Does Splunk have percentile functions like Excel's (i.e., PERCENTILE.EXC() and PERCENTILE.INC())? Excel sheet attached for reference.
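There is no built-in PERCENTILE.EXC() equivalent in SPL, but the exclusive interpolation (position = p*(n+1), linearly interpolated between neighbors) can be computed by hand. A sketch for the 25th percentile over a field named value (hypothetical field name; edge cases where the position falls outside 1..n are not handled):

```
| sort 0 value
| streamstats count AS rank
| eventstats count AS n
| eval pos=0.25*(n+1), lo=floor(pos), frac=pos-lo
| eval contrib=case(rank==lo,   value*(1-frac),
                    rank==lo+1, value*frac,
                    true(),     0)
| stats sum(contrib) AS p25_exc
```

The sum of the two weighted neighbors reproduces x_lo + frac*(x_lo+1 - x_lo), the PERCENTILE.EXC definition.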
I have created a dashboard panel that shows all users with failed logins in the form of a timechart. I'm trying to change the 'OTHER' value on my y-axis to show all the remaining users. How do I change my search query (below) to show the rest of the users?
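The OTHER series comes from timechart's default limit of 10 distinct series; raising or removing the limit shows every user. A sketch (the index and field names are placeholders for your actual failed-login search):

```
index=auth action=failure
| timechart limit=0 useother=false count BY user
```

limit=0 removes the series cap entirely; with many users, consider a finite limit instead to keep the chart readable.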
My query searches for an EventCode and displays host, time, task category, and message. I want to use color to highlight hosts that generate multiple EventCodes. Please help with the query.
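The search itself cannot color cells; the usual approach is to compute a flag field and drive the table's color formatting from it in the dashboard (Format > Color, by value). A sketch (the index name is hypothetical; keep your existing base search):

```
index=wineventlog EventCode=*
| eventstats dc(EventCode) AS code_count BY host
| eval repeat_host=if(code_count > 1, "multiple", "single")
| table _time host TaskCategory Message repeat_host
```

In the table formatting options, map repeat_host="multiple" to a highlight color to flag those hosts.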