All Topics


I need to make sure that a file is delivered every 10 minutes. It always arrives 5 seconds after the top of the 10-minute mark (6:00:05, 6:10:05 ... 6:50:05, 7:00:05, etc.) between 6am and 3pm on weekdays. This is the closest cron schedule I've been able to come up with:

*/11 6-15 * * 1-5

I can't use */10 because the file arrives 5 seconds after the 10-minute marks, so I used 11 and set the search time range to 5 minutes so that the last run of the hour catches the XX:50:05 file. The problem is that this schedule always misses the first file of the hour (XX:00:05): it runs immediately at the top of the hour, but the file doesn't arrive until 5 seconds later, and the next run isn't for another 11 minutes. Can anyone think of a better solution, or do I just have to create a second alert for those top-of-the-hour files? I can't find a way to delay the search by a few seconds, and I'd also like to know how to mute the erroneous triggers from the first alert.
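For reference, this is roughly what I'm imagining as an alternative, written as savedsearches.conf-style settings (an untested sketch; running on minutes 1, 11, 21 ... 51 and lagging the search window by a minute are assumptions on my part):

  # run a minute after each 10-minute mark so the XX:X0:05 file has already arrived
  cron_schedule = 1,11,21,31,41,51 6-15 * * 1-5
  # look back over the previous 10-minute slot, with a one-minute lag
  dispatch.earliest_time = -11m@m
  dispatch.latest_time = -1m@m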
I have a ton of reports in Splunk Enterprise and would like to sync them with ES to save time recreating them. Which is better, syncing or cloning? I'd prefer to sync them. Please advise. Thanks, and Happy 2022.
Hi, is anyone syncing detection content (searches) from SIEM Rules (https://www.siemrules.com/) to their Splunk instance? I'm looking at using their API to build an integration (https://docs.siemrules.com/developers/api-intro), but I'm wondering if anything already exists. I couldn't find any apps on Splunkbase.
This code:

import splunklib.client as client

host = "127.0.0.1"
port = "8000"
username = "---"
password = "----"

service = client.connect(username=username, password=password, host=host, port=port, scheme = https)

for app in service.apps:
    print(app.name)

produces:

SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1125)
Hi all, I am a complete newbie to Splunk. I want to know how to create reports in Splunk that show which log sources are reporting daily, and their event counts, over a particular time frame or the last 24 hours. Please help me here. Thank you for your support.
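For illustration, a starting point might be a tstats search along these lines (a sketch only; the index wildcard and the daily span are assumptions):

  | tstats count where index=* by _time span=1d host sourcetype

That should give an event count per day, per host, and per sourcetype, which could then be saved as a report.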
Hi everyone, I'm getting an error on my Splunk instance with the following description: "The lookup table '*' does not exist or is not available." The lookup name is not given; all I have is the '*'. Can you please suggest ways to troubleshoot this, so I can identify the name of the lookup and figure out where it is used? I have looked in both the _internal and the _audit indexes but couldn't find much. Thanks.
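For context, this is roughly the kind of search I've been running against _internal (a sketch; the sourcetype and the message text to match are assumptions):

  index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) "lookup table"
  | stats count by component, message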
I have a CSV file monitored by a UF, and the CSV data looks like this:

'"Name" "userid" "use location" "userdesignation"'
Raj raj-123 Argentina Consultant

I have written props and transforms as below, but the header line is still being ingested.

props.conf:

[Sourcetype]
Should_linemerge=false
Line_Breaker=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=CSV
category=structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
TRUNCATE=99999
DATETIME_CONFIG=CURRENT
KV_MODE=none
HEADER_FIELD_LINE_NUMBER=1
TRANSFORMS-set=setnull

transforms.conf:

[setnull]
REGEX=(^"NAME".*$) |(^\'\"NAME\".$)
DEST_KEY=queue
FORMAT=nullQueue

Please let me know what changes have to be made so that the header is not ingested.
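In case it helps frame the question, this is the direction I was considering for the transform (a sketch only; the case-insensitive regex is mine, and I'm not sure an index-time transform applies at all when the UF already does the structured parsing):

  [setnull]
  REGEX = (?i)^'?"Name"
  DEST_KEY = queue
  FORMAT = nullQueue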
Dear Splunk team, I hope everything is well with you. I am writing to let you know that I tried to sign up at Splunk [hosam.shafik@lxt.ai] to download the Splunk SOAR community edition, but I have not received any download link or verification email, although I registered 3 days ago. Can you please assist me with this problem?
Hello Splunkers, I need help. I have multiline logs that look like this:

01/04/22 03:00:00 MONITOR_RAP: blah blah: blah ; blah ; blah ; blah ; blah ;
01/04/22 07:00:00 MONITOR_RAP: blah blah: blah ; blah ; blah ; blah ; blah ;

I ingest them with the following sourcetype stanza:

[mysourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1000
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 17
TIME_FORMAT = %m/%d/%y %H:%M:%S

The Universal Forwarder monitors the directory where the logs land. The first ingestion succeeded without problems, but when new entries were written to today's log file, the parsing split each new entry into multiple events. The monitor stanza:

[monitor://<path>/*.log]
disabled = 0
sourcetype = mysourcetype
index = myindex

So the first couple of events were parsed as they should be, but when new lines arrived, Splunk produced multiple separate events like the one below, even though they all belong to one multiline event:

01/04/22 03:00:00 blah: blah ; blah ; blah ; blah; blah ;

What is wrong? Is it maybe a bug? I don't get it.
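For reference, the alternative stanza I was considering looks roughly like this (an untested sketch; using a LINE_BREAKER lookahead instead of line merging is my assumption):

  [mysourcetype]
  SHOULD_LINEMERGE = false
  # break only where a newline is immediately followed by a new date/time
  LINE_BREAKER = ([\r\n]+)(?=\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})
  TIME_PREFIX = ^
  TIME_FORMAT = %m/%d/%y %H:%M:%S
  MAX_TIMESTAMP_LOOKAHEAD = 17
  TRUNCATE = 1000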
I'm getting a bit confused about onboarding "csv" files. The files are _mostly_ CSV: they have a header with field names and comma-delimited fields, but they also have a kind of footer consisting of a line full of dashes followed by a line with "Total: number" in it. With a "normal" input I'd just set up props/transforms on the HF to route those lines to the nullQueue and be done with it. I'm not sure how this works with indexed extractions, though, after reading https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/Extractfieldsfromfileswithstructureddata#Caveats_to_extracting_fields_from_structured_data_files

Can I simply define transforms for my sourcetype just as with any other sourcetype?

The other question: the props.conf that I generated on my stand-alone instance, which seems to parse the file properly, looks like this:

[ mycsv ]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
category=Structured
disabled=false
pulldown_type=true
TIME_FORMAT=%s
TIMESTAMP_FIELDS=Time
HEADER_FIELD_LINE_NUMBER=1

But in the production environment the file will be read by a UF, and the data will then be sent to a HF and on to the indexers. Do I put all of these settings into props.conf on the UF or on the HF? Or do I split them between the two? I must admit that this whole indexed-extraction thing is tricky and IMHO not described well enough.
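For what it's worth, this is the sort of transform I had in mind for the footer, under the assumption that index-time transforms still apply to this data (the stanza name and regex are mine):

  transforms.conf:

  [drop_csv_footer]
  # route the dashed separator line and the "Total: n" line to the null queue
  REGEX = ^(?:-+|Total:\s*\d+)\s*$
  DEST_KEY = queue
  FORMAT = nullQueue

  props.conf:

  [ mycsv ]
  TRANSFORMS-dropfooter = drop_csv_footer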
Hi, I am trying to count the total number of jobs so far and show the daily trend using the timechart command. I'm not able to get it working; maybe I am messing up the span option. For example: if the total number of jobs executed so far is 100 and 10 jobs were added today, then tomorrow it should show 110 plus tomorrow's increase.

My command:

index=.......... projects="*" job_id="*" | dedup job_id | timechart span=60d count

In the picture you can see that the total number of events shown is 1688; I need that as a single value with the daily trend over it.
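For reference, this is the direction I was thinking of (a sketch only; using accum for the running total is my assumption):

  index=.......... projects="*" job_id="*"
  | dedup job_id
  | timechart span=1d count AS daily_jobs
  | accum daily_jobs AS total_jobs

Here daily_jobs would be the daily trend and total_jobs the cumulative count up to each day.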
Stage (field name) values:

Capa
Capa_india
north_Capa
checkcapaend
NET
net_east
southNETregion
showmeNET
us_net

From the field Stage, if the value contains "capa" or "Capa" I need to color the bar in the chart blue; otherwise the bar should be orange.

Thanks in advance.
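For illustration, the approach I was considering is to bucket the values first and then color by that bucket (a sketch; the group names, and the idea of mapping them to blue and orange with the charting.fieldColors panel option, are assumptions on my part):

  ... | eval color_group = if(match(Stage, "(?i)capa"), "capa", "other")
  | chart count over Stage by color_group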
I am running this search to return batch job run times:

index=sr_prd sourcetype=batch_roylink earliest=-7d@d
| eval s=strptime(Scheduled_Batch_StartTime, "%Y-%m-%d %H:%M:%S.%Q")
| eval e=strptime(Scheduled_Batch_Endtime, "%Y-%m-%d %H:%M:%S.%Q")
| eval s=round(s,2)
| eval e=round(e,2)
| eval r=tostring(e-s, "duration")
| rename "Scheduled_Batch_StartTime" as "Start Time", "Scheduled_Batch_Endtime" as "End Time", r as "Runtime (H:M:S)"
| stats list(s) as "s", list("Start Time") as "Start Time", list("End Time") as "End Time", list("Runtime (H:M:S)") as "Runtime (H:M:S)" by Task_Object
| search Task_Object = Roylink_Upload
| sort s

Even though 's' is numeric, the results are not being returned in sorted order. Any ideas why this is happening? Thanks.
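For reference, the variant I was going to try next (a sketch; the assumption is that list() preserves the order in which results arrive, so sorting on s before the stats might keep the listed values in start-time order):

  ...
  | sort 0 s
  | stats list(s) as "s", list("Start Time") as "Start Time", list("End Time") as "End Time", list("Runtime (H:M:S)") as "Runtime (H:M:S)" by Task_Object
  | search Task_Object = Roylink_Upload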
Hi all, I have a .csv file named Master_List.csv added as a Splunk lookup. It has the fields "Tech Stack", "Environment", "Region" and "host", with about 350 values per field. After adding the lookup table, the inputlookup command works fine and returns the table. But when I use the lookup command in the query below, the lookup fields don't appear in the output field list on the left-hand side, even though all the required permissions have been granted:

index=tibco_main sourcetype="NON-DIGITAL_TIBCO_INFRA_FS"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| rex field=_raw "(?P<Time>\w+\s\w+\s\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| lookup Master_List.csv "Environment"

Can someone please guide me on how to get the lookup command working, or help modify the command?

Thank you.
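For reference, the variant I was planning to try (a sketch only; the assumption is that host is the field shared between my events and the CSV, so the lookup should match on it rather than on Environment):

  | lookup Master_List.csv host OUTPUT "Tech Stack", Environment, Region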
Hi all, I currently have a dashboard that is used to review batch run times. It lets the user pick a task from a dropdown and view the run times for that task within the batch process. I have subsequently been asked to add the option to view the total batch time taken, which requires a different search from the one used for the individual batch jobs. I have been able to use saved searches to achieve this. However, the original dashboard dropdown was linked to two searches, which used the task name to produce a table and a timechart. My question is: can this be done with saved searches? As far as I can see, the dropdown only allows a link to one saved search. As always, any assistance is gratefully received.
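For illustration, the shape I was wondering about is having each panel call the savedsearch command with the dropdown token in the name (a sketch; the token and saved-search names here are hypothetical):

  panel 1:  | savedsearch "batch_runtimes_table_$task$"
  panel 2:  | savedsearch "batch_runtimes_timechart_$task$"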
Hi, we are using Splunk Cloud. We have installed Symantec Endpoint Protection version 14.3 RU3 build 5413. We are not using Symantec Endpoint Protection Manager; we manage all SEP clients with the Symantec cloud hybrid console. Can you please help me with how to send Symantec Endpoint Protection client logs from all Windows servers to Splunk Cloud, and how to configure the data inputs for this? Sorry, I am new to Splunk and cannot find any documentation covering Symantec Endpoint Protection to Splunk Cloud.
Hello everyone, I'm trying to configure SSL for the indexer cluster's replication port. I followed this link to create my SSL certificates: https://community.splunk.com/t5/Security/How-do-I-set-up-SSL-forwarding-with-new-self-signed-certificates/td-p/57046 But it isn't working when I apply the configuration on my two indexers. I'm using Splunk 8.2.4, and the configuration in server.conf on indexer 01 looks like this:

[general]
serverName = indexer01
pass4SymmKey = $7$HQ2TzhHg23gLrg+/+ScnhxM9sWIunYIUH07h6YVnt48KdK+zxDO75w==

[sslConfig]
sslPassword = $7$ok1uDkFNGR57BNpNpzjg7wMPWc6uAng9lvIQPj3YX5MZwhccbVOZWw==

[replication_port-ssl://8080]
acceptFrom = <indexer02's IP>
rootCA = /opt/splunk/etc/certs/cacert.pem
serverCert = /opt/splunk/etc/certs/indexer.pem
sslPassword = P@ssw0rd
sslCommonNameToCheck = indexer
requireClientCert = true

So I would like to ask the community: what is the correct configuration for enabling SSL on the replication port between indexers? Please help me. Thanks for your concern!
Hi everyone, I would like to retrieve all the column names and the field values for each row and put them in an alert, without doing it manually. Could you let me know if it is possible to iterate through each column name in Splunk? My desired output looks like this:

① [This is for the row labeled ①]
journal.status_id.old_value: 90
journal.status_id.new_value: 95

② [This is for the row labeled ②]
journal.assigned_to_id.old_value: 113
journal.assigned_to_id.new_value: 99

③ [This is for the row labeled ③]
journal.status_id.old_value: 73
journal.status_id.new_value: 90
journal.assigned_to_id.old_value: null
journal.assigned_to_id.new_value: 113

Other columns may also be present, so I would like to do this via a loop.
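For reference, the kind of thing I've been experimenting with is foreach (a sketch only; the wildcard patterns, the summary field name, and the null handling are my assumptions):

  ... | foreach journal.*.old_value journal.*.new_value
      [ eval summary = mvappend(summary, "<<FIELD>>: " . coalesce(tostring('<<FIELD>>'), "null")) ]
  | table summary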
I was trying to get the DaemonSet up and running and got the errors below while waiting for the pods to become ready:

[error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/splunk-fluentd-kube-audit.pos"
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `initialize'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `open'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `start'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:203:in `block in start'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:192:in `block (2 levels) in lifecycle'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:191:in `each'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:191:in `block in lifecycle'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:178:in `each'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:178:in `lifecycle'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:202:in `start'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/engine.rb:248:in `start'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/engine.rb:147:in `run'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:717:in `block in run_worker'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:968:in `main_process'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:708:in `run_worker'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/command/fluentd.rb:372:in `<top (required)>'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/bin/fluentd:15:in `require'
0 [error]: #0 /usr/share/gems/gems/fluentd-1.14.2/bin/fluentd:15:in `<top (required)>'
0 [error]: #0 /usr/bin/fluentd:23:in `load'
0 [error]: #0 /usr/bin/fluentd:23:in `<main>'
[error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/splunk-fluentd-kube-audit.pos"
0 [error]: #0 suppressed same stacktrace
Hello, suppose I've got the following URLs among many others (the logs come from something close to Squid but are not indexed properly by Splunk):

nav.smartscreen.microsoft.com:443
https://www.francebleu.fr/img/antenne.svg
http://frplab.com:37566/sdhjkzui1782109zkjeznds
http://192.168.120.25:25
https://images.taboola.com/taboola/image/fetch/f_jpg%2Cq_auto%2Ch_175%2Cw_300%2Cc_fill%2Cg_faces:auto%2Ce_sharpen/http%3A%2F%2Fcdn.taboola.com%2Flibtrc%2Fstatic%2Fthumbnails%2Fd46af9fc9a462b0904026156648340b7.jpg

I would like to extract the port number when there is one. I've seen a lot of similar cases on Splunk Answers, but the URL formatting was less variable than mine. The only way I've found to achieve this is the following SPL:

index=* sourcetype=syslog
| rex field=url "(http|https)?[^\:]+\:(?<port>[^\/]+)"
| eval monport = if(isint(port), port, 0)
| top monport

Is there a more elegant way to do this?
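For comparison, this is the alternative I was toying with (a sketch only; anchoring on a colon followed by digits, and defaulting the port from the scheme, are assumptions of mine):

  index=* sourcetype=syslog
  | rex field=url ":(?<port>\d+)(?:/|$)"
  | eval monport = coalesce(port, case(match(url, "^https://"), 443, match(url, "^http://"), 80, true(), 0))
  | top monport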