After defining a "rising column" for a DB input, DB Connect runs the defined query to load data, for example:

    select * from tableA where fieldX > ?

How is the execution scheduled? If the query takes less than one second to run, does that mean the next run is kicked off right after the previous one finishes? Is there any configuration for the waiting time before the next run is kicked off?
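For context (a sketch only: the stanza layout and key names below follow DB Connect v3's db_inputs.conf from memory and may differ in other versions), the cadence is controlled by the input's interval setting, which takes seconds or a cron expression, so runs fire on that schedule rather than back-to-back:

```
# db_inputs.conf -- illustrative sketch; input and connection names are placeholders
[my_rising_input]
connection = my_connection
mode = rising
query = select * from tableA where fieldX > ?
tail_rising_column_name = fieldX
interval = 60    # run the query roughly every 60 seconds
```

If the query finishes in under a second, the input still waits for the next interval tick before running again.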
Hi team, when I use the stats command to group and aggregate, for example:

    <base query here> | bin span=1d _time | stats count(eval(autosave=1)) as autosave count(eval(autosave=0 OR autosave=1)) as total by _time, DC

a column named 'OTHER' is created. I want all the columns displayed, instead of having them grouped into an 'OTHER' column when there are many columns. I know the timechart command has a useother=false parameter that does this, but I can't find such a parameter for the stats command. Can you help?
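For what it's worth, the OTHER bucket is produced by the charting commands rather than by stats itself; with timechart both the series cap and the OTHER series can be disabled directly (a sketch reusing the fields from the question):

```
<base query here>
| timechart span=1d limit=0 useother=false count(eval(autosave=1)) as autosave count(eval(autosave=0 OR autosave=1)) as total by DC
```

Here limit=0 lifts the default series limit and useother=false suppresses the OTHER column.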
Hi all! I'm trying to run a simple search via the Python SDK (Python 3.8.5, splunk-sdk 1.6.14). The examples presented on dev.splunk.com are clear, but something goes wrong when I run a search with my own parameters. The code is as simple as this:

    search_kwargs_params = {
        "exec_mode": "blocking",
        "earliest_time": "2020-09-04T06:57:00.000-00:00",
        "latest_time": "2020-11-08T07:00:00.000-00:00",
    }
    search_query = 'search index=qwe1 trace=111-aaa-222 action=Event.OpenCase'
    job = self.service.jobs.create(search_query, **search_kwargs_params)
    for result in results.ResultsReader(job.results()):
        print(result)

But the search returns no results. When I run the same query manually in the Splunk web GUI, it works fine. I've also tried putting all parameters into the 'search_kwargs_params' dictionary and widening the search time period; that produced some results, but they don't match what I get in the GUI. Can someone advise?
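One common cause of empty SDK results is a time window that splunkd rejects or interprets unexpectedly: earliest_time/latest_time must be ISO 8601 strings with an explicit UTC offset. A stdlib-only sanity check (a sketch independent of splunk-sdk; the helper name is made up) confirms the strings parse and are correctly ordered:

```python
from datetime import datetime

def parse_splunk_time(ts: str) -> datetime:
    """Parse an ISO 8601 timestamp of the form used for earliest_time/latest_time."""
    # %f = fractional seconds, %z = UTC offset ("-00:00" with colon is accepted on Python 3.7+)
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

window = {
    "earliest_time": "2020-09-04T06:57:00.000-00:00",
    "latest_time": "2020-11-08T07:00:00.000-00:00",
}
start = parse_splunk_time(window["earliest_time"])
end = parse_splunk_time(window["latest_time"])
assert start < end, "earliest_time must precede latest_time"
```

If the strings parse cleanly here but the SDK still returns nothing, comparing the job's reported time bounds against the GUI search is a reasonable next step.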
Hello, I am trying to send Windows event logs from my machines to my clustered indexers, and also to send the same event logs, but in XML format, to a heavy forwarder for a third party.

My inputs.conf looks like this:

    [WinEventLog://security]
    disabled = 0
    index = xxxx
    renderXml = false

    [WinEventLog://security]
    disabled = 0
    renderXml = true
    _TCP_ROUTING = heavy1

My outputs.conf is the following:

    [tcpout:group1]
    indexerDiscovery = idxc1
    autoLBVolume = 65536

    [indexer_discovery:idxc1]
    master_uri = https://serverip:serverport
    pass4SymmKey = xxxx
    cxn_timeout = 300

    [tcpout:heavyforwarder]
    defaultGroup = heavy1

    [tcpout:heavy1]
    server = serverip:serverport

Does anyone know why it no longer sends to my clustered indexers? Note that I did put _TCP_ROUTING = group1 under the non-XML event log stanza in inputs.conf and it still didn't work. Cheers in advance.
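A likely issue (an assumption, not confirmed by the post): the two [WinEventLog://security] stanzas share the same name, so Splunk merges them into a single input, and the later settings (renderXml = true, _TCP_ROUTING = heavy1) win, leaving nothing routed to the cluster. Also, defaultGroup is normally read from the global [tcpout] stanza rather than from a named group. A minimal sketch of the merged-stanza half of the fix, reusing the group names from the question:

```
# inputs.conf -- only one stanza per input name is honoured;
# route the single input to both output groups
[WinEventLog://security]
disabled = 0
index = xxxx
_TCP_ROUTING = group1,heavy1
```

Since renderXml applies per input, getting one XML copy and one plain-text copy of the same channel needs a second collection path, for example collecting the channel on another forwarder with renderXml = true.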
Hi everybody, I installed the Clearpass TA application on my SH instance. It collects logs via syslog. Here is the configuration in the application's inputs.conf:

    [udp://4514]
    sourcetype = Aruba:CPPM:Syslog
    index = cg93_clearpass

I restarted the service, but no logs appear. On my syslog server, I ran the following command:

    tcpdump -i eth0 port 4514

and I can see that the logs are arriving at the Splunk syslog server. Do you know where the problem might come from?
Does sequence matter in a search? Of the two queries below, which is recommended, or will both perform the same?

    Query 1: index=myindex AND "mystring" AND host=myprodhost*
    Query 2: index=myindex AND host=myprodhost* AND "mystring"
My event looks like this:

    6|1|1|12|1907|1|1|1219079|1|1|126G|19079|1|1|12NB|190|1|1|126G774_100_NB|1907|1|1|126G|19079

Sometimes the number of field groups changes (for example 5, 7, or 8 groups arrive), so the number of rows is dynamic. The desired output is:

    A  B  C  D       E
    6  1  1  650_3M  1921
       1  1  1758    749
       1  1  55      22
       1  1  55      33
       1  1  55      55
       1  1  55      66
Hi All, I've just installed and been testing the Microsoft Teams messages publication add-on (4855 by @guilmxm). It generates alerts in Teams fine, but I cannot get it to output a potential action linking back to the Splunk results. My configuration is as follows:

    Message Activity Title: Alert: $name$ - $job.resultCount$ events
    Message fields list: API_APPLICATION, API_ENDPOINT
    OpenURL Potential Action Name: View in Splunk
    OpenURL Potential Action URL: $results_link$

This results in a card with no action button, and I don't know where the "E" and "P" are coming from. I've played about with the settings in a number of ways but cannot get an action to appear; perhaps I'm missing something. Nothing notable is shown in the log. If anybody has ideas or a working configuration they can share with me, that would be great. Thank you.
My search:

    ... | stats values(something) as nothing | outputlookup gemini

I want my query output to be saved in this lookup, but when I run the above I get the error "The lookup table 'gemini' is invalid". I think it is asking for a lookup definition. But how do I provide the definition when the lookup file is the output of my query?
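Note that outputlookup accepts either an existing lookup definition or a filename ending in .csv; writing to a filename sidesteps the missing definition (a sketch reusing the names from the question):

```
... | stats values(something) as nothing
| outputlookup gemini.csv
```

The file lands in the app's lookups directory; a lookup definition pointing at it can be added later under Settings > Lookups if a bare name is preferred.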
Hi, I would like to create a graph showing the average vulnerability age for each month, by severity. I use this search:

    | tstats `summariesonly` min(_time) as firstTime, max(_time) as lastTime, count
        from datamodel=Vulnerabilities.Vulnerabilities
        by _time Vulnerabilities.signature, Vulnerabilities.dest, Vulnerabilities.severity span=1mon
    | `drop_dm_object_name("Vulnerabilities")`
    | where firstTime!=lastTime AND severity!="informational"
    | eval age=round((lastTime-firstTime)/86400)
    | eval _time=lastTime
    | timechart span=1mon avg(age) by severity
    | fields _time low medium high critical

However, the age is calculated independently for each month, meaning that if a vulnerability spans multiple months, its age caps at roughly 30 days for each month in the graph. I'm unsure how to make it cumulative.
Hello, we are using Splunk Enterprise 6.5 and we want to upgrade to the latest version. What is the best way to do this? Thanks.
Can anybody help me create props.conf and transforms.conf files to correctly parse logs like these?

    "2020-10-08 09:35:56","Department1","Department2","113.8.10.134","113.8.10.132","Allowed","1 (A)","NOERROR","ant.com.","Search Engines","Networks","Networks",""
    "2020-10-08 09:35:56","Department1","Department2","113.8.10.134","113.8.10.132","Allowed","1 (A)","NOERROR","ant.com.","Search Engines,Malware","Networks","Networks",""
    "2020-10-08 09:35:56","Department1","Department2","113.8.10.134","113.8.10.132","Allowed","1 (A)","NOERROR","ant.com.","Search Engines,Malware, Network","Networks","Networks",""
    "2020-10-08 09:35:56","Department1","Department2","113.8.10.134","113.8.10.132","Allowed","1 (A)","NOERROR","ant.com.","Malware","Networks","Networks",""

As you can see, the log with the category field "Search Engines,Malware" should belong to both categories: Search Engines and Malware. The category field can consist of 1, 2, 3, or more values.
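One possible approach (a sketch; the sourcetype and most field names below are invented, and only the category position comes from the samples) is to parse the quoted CSV with indexed extractions and then split category into a multivalue field at search time:

```
# props.conf -- illustrative sketch
[my:dns:log]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = time,dept_a,dept_b,src_ip,dest_ip,action,record_type,reply_code,domain,category,net_a,net_b,extra
TIMESTAMP_FIELDS = time

# split "Search Engines,Malware" into separate category values at search time
EVAL-category = split(category, ",")
```

With this variant no transforms.conf is needed, though INDEXED_EXTRACTIONS must be deployed to the instance that first parses the data.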
I am currently trying to use a regex to pick out the events with the date '2020XXXX'. I want the regex to pick up any event date provided it does not have 'reg' following the '.' or '_' (i.e. pick out all the event dates below except the first). How do I do this?

Current regex: 2020\d{4}[\.\_]

List of different events/logs from the Splunk search:

    _20201007144100_20200416_reg.zip
    _20201007103200_20201007.zip
    _20201007095000_20201007.zip
    _20201007092933_20201007.zip
    _20201007061717_20201007_txn.zip
    _20201007041719_20201007.zip
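A negative lookahead after the separator should do it; a quick check in Python (the re syntax here matches Splunk's PCRE closely enough for this pattern, and the file names are a subset of those in the question):

```python
import re

# match an 8-digit 2020 date followed by '.' or '_' that is NOT followed by 'reg'
pattern = re.compile(r"2020\d{4}[._](?!reg)")

logs = [
    "_20201007144100_20200416_reg.zip",   # should be excluded
    "_20201007103200_20201007.zip",
    "_20201007095000_20201007.zip",
    "_20201007061717_20201007_txn.zip",
]

matches = [name for name in logs if pattern.search(name)]
# keeps every name except the one ending in _reg.zip
```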
When I submit a newer version of my app (with some fixes) to Splunkbase for release, I get the error below:

    The "id" field found in app.conf is already in use by another application

Please let me know what can be done here. Note: it is an app, and I upgraded the version in app.conf from 1.0.1 to 1.0.2. Thanks, Manish.
Hello, I have a new task that I'm not sure how to go about. I'm new to Splunk, so any help is really appreciated. I want to create a dashboard that monitors all power issues that have been logged, as well as a dashboard for all remaining issues, based on the message text below:

    host_name=Contoso* OR host_name=Kontoso* AND message_text="Power supply 1 has failed or been turned off" OR message_text="Power supply 1 is okay" OR message_text="Power supply 2 has failed or been turned off" OR message_text="Power supply 2 is okay" OR "Power-module 0/PS0/M1/SP failure condition cleared" OR "0/PS0/M1/SP, state: FAILED"

First off, the field message_text only captured four out of six messages, so these two were left out:

    "Power-module 0/PS0/M1/SP failure condition cleared" OR "0/PS0/M1/SP, state: FAILED"

I tried to see if I could create a new field or update message_text to include these two, but it looked like they just went into a new field that I couldn't find when I used the same filter afterwards. Is this where I use the eval function to compare and remove logs that have been cleared?
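One thing worth noting: without parentheses, AND binds tighter than OR, so the query as written mixes the host and message terms in unintended ways. A grouped sketch (values taken from the question):

```
(host_name=Contoso* OR host_name=Kontoso*) AND (
    message_text="Power supply 1 has failed or been turned off"
    OR message_text="Power supply 1 is okay"
    OR message_text="Power supply 2 has failed or been turned off"
    OR message_text="Power supply 2 is okay"
    OR "Power-module 0/PS0/M1/SP failure condition cleared"
    OR "0/PS0/M1/SP, state: FAILED"
)
```

The last two quoted strings search the raw event text, which also works for events where message_text was never extracted.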
Hi, I'm new to the Splunk community. I was trying to generate a PDF report from a dashboard via Export > Schedule PDF Delivery > Send Test Email. I got the following error in my email after almost an hour of processing: "An error occurred while generating the PDF. Please see python.log for details". The log shows:

    2020-10-07 21:49:44,499 +0300 INFO sendemail:1163 - sendemail pdfgen_available = 1
    2020-10-07 21:49:44,499 +0300 INFO sendemail:1309 - sendemail:mail effectiveTime=None
    2020-10-07 22:49:44,522 +0300 ERROR __init__:522 - Socket error communicating with splunkd (error=('The read operation timed out',)), path = /services/pdfgen/render
    2020-10-07 22:49:44,523 +0300 ERROR sendemail:1170 - An error occurred while generating a PDF: Failed to fetch PDF (SplunkdConnectionException): Splunkd daemon is not responding: ("Error connecting to /services/pdfgen/render: ('The read operation timed out',)",)
    2020-10-07 22:49:44,523 +0300 INFO sendemail:1189 - buildAttachments: not attaching csv as there are no results
    2020-10-07 22:49:44,957 +0300 INFO sendemail:140 - Sending email. subject="Splunk Dashboard: 'Traffic Dashboard PDF - 7 Days'", results_link="None", recipients="[u'my@email.com']", server="192.168.8.10"

And when I clicked Preview PDF on the same page as Send Test Email, I got the following error (repeated several times) after almost an hour of processing:

    Unable to render PDF. Exception raised while trying to prepare "Traffic Dashboard PDF - 7 Days" for rendering to PDF. [HTTP 401] Client is not authenticated

We have deployed Splunk with 8 (4x2) CPUs and 16 GB of memory, and I noticed that whenever I generate a PDF or run a search, the CPU goes to 100%. Is this the problem? And why is Splunk consuming so much CPU?
Sample data:

    _time                    source  name                    appId  state
    10/8/20 7:53:27.090 AM   xyz     Transform-x-2020-10-08  1001   success
    10/8/20 7:53:16.890 AM   xyz     Transform-x-2020-10-08  1001   running
    10/8/20 7:53:06.490 AM   xyz     Transform-x-2020-10-08  1001   started
    10/8/20 7:53:27.090 AM   xyz     copy-y-2020-10-08       203    success
    10/8/20 7:53:16.890 AM   xyz     copy-y-2020-10-08       203    running
    10/8/20 7:53:06.490 AM   xyz     copy-y-2020-10-08       203    started

There are three rows with the same name (Transform-x-) and appId (1001), and another three rows with the same name (copy-y-) and appId (203). I need to fetch the latest row for each appId. Expected output:

    _time                    source  name                    appId  state
    10/8/20 7:53:27.090 AM   xyz     Transform-x-2020-10-08  1001   success
    10/8/20 7:53:27.090 AM   xyz     copy-y-2020-10-08       203    success

Can someone please help? I'm new to Splunk.
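Keeping only the most recent event per key is commonly done with dedup (a sketch using the field names from the sample data):

```
<base search>
| dedup appId sortby -_time
| table _time source name appId state
```

An equivalent stats form (latest(...) of each field, by appId) also works if further aggregation is needed afterwards.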
Hello! I need your help, Splunkers! I want to append to, or create, a CSV for each row of my query. I did this to assign the fields to the file name:

    | .. query
    | foreach * [
        | eval customer_name = if("<<FIELD>>" = "nom_site", "DATA_".'<<FIELD>>'.".csv", "Error")
        | outputlookup append=true customer_name ]

Error: Error in 'outputlookup' command: The lookup table 'customer_name' is invalid. (I also tried $customer_name$ and $customer_name.)

My second attempt used a lookup subsearch:

    | ... query
    | foreach * [
        | eval customer_name = if("<<FIELD>>" = "nom_site", '<<FIELD>>', "Error")
        | outputlookup append=true
            [| inputlookup customers.csv | search name=customer_name
             | eval name_file = "DATA_".name.".csv" | return $name_file)] ]

Error: Error in 'outputlookup' command: A lookup table name or file name is required.

Finally, I tried calling a macro (which contains the preceding lookup search):

    | ... query
    | foreach * [
        | eval customer_name = if("<<FIELD>>" = "nom_site", '<<FIELD>>', "Error")
        | outputlookup append=true `name_to_csv(customer_name)` ]

Thanks for your help. Have a nice day.
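For context, outputlookup takes a literal table or file name, not a field value, which is why each attempt fails. One hedged workaround is map, which substitutes $field$ tokens from incoming rows and can run one outputlookup per distinct value (a sketch; the filename pattern comes from the question, and note that map re-executes a search per row, so it is expensive):

```
| ... query
| stats count by nom_site
| map maxsearches=100 search="| ... query | search nom_site=$nom_site$ | outputlookup append=true DATA_$nom_site$.csv"
```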
Hi, I need some help with a timestamp recognition problem. I have two almost identical events that are sent over UDP to Splunk; in one, the epoch timestamp is recognized correctly, while in the other Splunk uses the time received. Does anyone have any idea why this is happening? The events are generated by a PowerShell script.
Hi team, I have the query below:

    sourcetype=xxxx AND "POST /123?123_form_type=review&itrModule=cherie*"
    | rex field=_raw "POST\s+(?<uri>.*)HTTP.*name\=(?<name>.*?)\&"
    | eval saveMode=if(uri like "%cherie=true%", "1", "0")
    | timechart span=1d count(eval(saveMode=1)) as autosave count(eval(saveMode=0 OR saveMode=1)) as total by DC

Query result in Splunk:

    _time       autosave: DC23  autosave: DC41  autosave: DC44  total: DC23  total: DC41  total: DC44
    2020-10-07  247             50              87              500          600          700
    2020-10-08  304             0               12              500          600          700

But the result I expect is:

1. The columns for the same DC placed together, for easy comparison.
2. A new column autosave%: DCxx, calculated as autosave/total*100. I tried an eval after the timechart command, but the column doesn't display:

    | timechart span=1d count(eval(saveMode=1)) as autosave count(eval(saveMode=0 OR saveMode=1)) as total by DC
    | eval autosave%=round((autosave/total) * 100, 2)

So the expected table should look like this. Is this achievable?

    _time       autosave: DC23  total: DC23  autosave%: DC23  autosave: DC41  total: DC41  autosave%: DC41  autosave: DC44  total: DC44  autosave%: DC44
    2020-10-07  247             500                           50              600                           87              700
    2020-10-08  304             500                           0               600                           12              700

Thanks, Cherie.
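Since the split-by columns are literally named "autosave: DC23" and so on, a plain eval cannot address them; foreach with a wildcard can compute the ratio per DC (a sketch that relies on the column names timechart produces):

```
| timechart span=1d count(eval(saveMode=1)) as autosave count(eval(saveMode=0 OR saveMode=1)) as total by DC
| foreach "autosave: *" [ eval "autosave%: <<MATCHSTR>>" = round('<<FIELD>>' / 'total: <<MATCHSTR>>' * 100, 2) ]
| table _time *DC23 *DC41 *DC44
```

The final table line lists the columns DC by DC, which also groups each DC's three columns together.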