All Topics

Stage (field name) values: Capa, Capa_india, north_Capa, checkcapaend, NET, net_east, southNETregion, showmeNET, us_net. From the field Stage, if the value contains capa or Capa, I need to color the bar chart blue; otherwise the bar chart should be orange. Thanks in advance.
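One possible approach (a sketch, not tested against this data; the field and color values are illustrative): derive a grouping field with eval, then map colors to the series names through the dashboard's charting.fieldColors option.

```
... | eval group=if(like(lower(Stage), "%capa%"), "Capa", "Other")
    | chart count over group
```

And in the panel's Simple XML:

```
<option name="charting.fieldColors">{"Capa": 0x0000FF, "Other": 0xFFA500}</option>
```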
I am running this search to return batch job run times:

index=sr_prd sourcetype=batch_roylink earliest=-7d@d | eval s=strptime(Scheduled_Batch_StartTime, "%Y-%m-%d %H:%M:%S.%Q") | eval e=strptime(Scheduled_Batch_Endtime, "%Y-%m-%d %H:%M:%S.%Q") | eval s=round(s,2) | eval e=round(e,2) | eval r=tostring(e-s, "duration") | rename "Scheduled_Batch_StartTime" as "Start Time", "Scheduled_Batch_Endtime" as "End Time", r as "Runtime (H:M:S)" | stats list(s) as "s", list("Start Time") as "Start Time", list("End Time") as "End Time", list("Runtime (H:M:S)") as "Runtime (H:M:S)" by Task_Object | search Task_Object = Roylink_Upload | sort s

Even though 's' is a numeric string, the results are not returned in sorted order. Any ideas why this is happening? Thanks
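For reference, list() produces a multivalue field, and sorting on a multivalue string field compares lexicographically rather than numerically. One hedged workaround is to carry a single-valued numeric key through the stats and sort on that (the key name is illustrative):

```
... | stats min(s) as sort_key, list("Start Time") as "Start Time",
        list("End Time") as "End Time",
        list("Runtime (H:M:S)") as "Runtime (H:M:S)" by Task_Object
    | sort 0 num(sort_key)
```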
Hi All, I have a .csv file named Master_List.csv added as a Splunk lookup. It has values for the fields "Tech Stack", "Environment", "Region" and "host", with about 350 values per field. After adding the lookup table, the inputlookup command works fine and returns the output table. But when I use the lookup command in the query below, the lookup fields do not appear in the output field list on the left-hand side, even though all the required permissions have been provided:

index=tibco_main sourcetype="NON-DIGITAL_TIBCO_INFRA_FS"  | regex _raw!="^\d+(\.\d+){0,2}\w" | regex _raw!="/apps/tibco/datastore" | rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%" | rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)" | rex field=_raw "(?P<Time>\w+\s\w+\s\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d" | rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d" | rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d" | lookup Master_List.csv "Environment"

Can someone please guide me on how to get the lookup command working, or help modify the command? Thank you.
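As a sketch (assuming a lookup definition named Master_List has been created over the CSV): the lookup command expects a lookup definition name rather than the raw .csv filename, and an explicit OUTPUT clause makes the returned fields unambiguous. Field names containing spaces need quoting:

```
| lookup Master_List "Environment" OUTPUT "Tech Stack" "Region" "host"
```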
Hi All, I currently have a dashboard that is used to review batch run times.  It allows the user to use a dropdown to select and view the run times for each task within the batch process.  I have subsequently been asked to add the option to view total batch time taken.  To do this requires a different search to that used for the individual batch jobs. I have been able to use saved searches to achieve this. However, the original dashboard dropdown was linked to two searches which used the task name to produce a table and a timechart.   My question is, can this be done with saved searches?  As far as I can see, the dropdown only allows a link to one saved search. As always, any assistance is gratefully received.
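For what it's worth, a Simple XML dropdown sets a token rather than binding to a single search, so any number of searches and saved searches can reference the same token. A minimal sketch (the saved-search and field names are illustrative, and assume the saved searches contain matching $task$ tokens):

```
<input type="dropdown" token="task">
  <label>Task</label>
  <search><query>| savedsearch task_list</query></search>
  <fieldForLabel>Task_Object</fieldForLabel>
  <fieldForValue>Task_Object</fieldForValue>
</input>

<!-- both panels reference the same token -->
<search><query>| savedsearch task_table task="$task$"</query></search>
<search><query>| savedsearch task_timechart task="$task$"</query></search>
```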
Hi, we are using Splunk Cloud. We have installed Symantec Endpoint Protection version 14.3 RU3 build 5413. We are not using Symantec Endpoint Protection Manager; we are using Symantec cloud hybrid to manage all SEP clients. Can you please help with how I can send Symantec Endpoint Protection client logs from all Windows servers to Splunk Cloud? How can I configure data inputs for this? Sorry, I am new to Splunk and cannot find any documentation for Symantec Endpoint Protection to Splunk Cloud.
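One common pattern (a sketch only: the log path and sourcetype below are assumptions; verify the actual client log location for your SEP version) is to install a universal forwarder on each Windows server, connect it to Splunk Cloud with the Universal Forwarder credentials package from your Cloud stack, and monitor the client logs with an inputs.conf stanza:

```
[monitor://C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Logs\*.log]
sourcetype = symantec:ep:client
index = sep
disabled = 0
```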
Hello everyone, I'm trying to configure SSL on the indexer cluster's replication port. I followed this link to create my SSL cert: https://community.splunk.com/t5/Security/How-do-I-set-up-SSL-forwarding-with-new-self-signed-certificates/td-p/57046 But it isn't working when I configure it on my two indexers. I'm using Splunk version 8.2.4, and the configuration in server.conf on indexer01 is as follows:

[general]
serverName = indexer01
pass4SymmKey = $7$HQ2TzhHg23gLrg+/+ScnhxM9sWIunYIUH07h6YVnt48KdK+zxDO75w==

[sslConfig]
sslPassword = $7$ok1uDkFNGR57BNpNpzjg7wMPWc6uAng9lvIQPj3YX5MZwhccbVOZWw==

[replication_port-ssl://8080]
acceptFrom = IP's indexer02
rootCA = /opt/splunk/etc/certs/cacert.pem
serverCert = /opt/splunk/etc/certs/indexer.pem
sslPassword = P@ssw0rd
sslCommonNameToCheck = indexer
requireClientCert = true

So I would like to ask the community for the correct configuration to enable SSL on the replication port between indexer servers. Please help me. Thanks for your concern!
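For comparison, a hedged sketch of a replication-port SSL stanza (the port, paths, names and password are placeholders; the stanza would be repeated on each indexer with its own certificate, and the common names must match what the certificates actually contain):

```
[replication_port-ssl://9887]
disabled = 0
serverCert = /opt/splunk/etc/certs/indexer01.pem
sslPassword = <key password used when the cert was created>
rootCA = /opt/splunk/etc/certs/cacert.pem
requireClientCert = true
sslCommonNameToCheck = indexer01, indexer02
```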
Hi everyone, I would like to retrieve all the column names and the field values for each row and put them in an alert, without doing it manually. Could you let me know if it is possible to iterate through each column name in Splunk? My desired output looks like this:

① [This is for the row labeled ①]
journal.status_id.old_value: 90
journal.status_id.new_value: 95

② [This is for the row labeled ②]
journal.assigned_to_id.old_value: 113
journal.assigned_to_id.new_value: 99

③ [This is for the row labeled ③]
journal.status_id.old_value: 73
journal.status_id.new_value: 90
journal.assigned_to_id.old_value: null
journal.assigned_to_id.new_value: 113

It is possible for other columns to be present, so I would like to do it via a loop.
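A possible sketch using foreach to loop over whatever journal.* columns are present in each row (field names containing dots need single quotes when referencing the value; the output field name is illustrative):

```
| foreach journal.*
    [ eval details=mvappend(details,
        "<<FIELD>>: " . coalesce(tostring('<<FIELD>>'), "null")) ]
```

Each row then carries a multivalue field details with one "name: value" entry per populated column, which can be rendered in the alert.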
I was trying to get the DaemonSet up and running and got the errors below while waiting for the pods to become ready:

[error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/splunk-fluentd-kube-audit.pos"
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `initialize'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `open'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/plugin/in_tail.rb:241:in `start'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:203:in `block in start'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:192:in `block (2 levels) in lifecycle'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:191:in `each'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:191:in `block in lifecycle'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:178:in `each'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:178:in `lifecycle'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/root_agent.rb:202:in `start'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/engine.rb:248:in `start'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/engine.rb:147:in `run'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:717:in `block in run_worker'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:968:in `main_process'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/supervisor.rb:708:in `run_worker'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/lib/fluent/command/fluentd.rb:372:in `<top (required)>'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/bin/fluentd:15:in `require'
[error]: #0 /usr/share/gems/gems/fluentd-1.14.2/bin/fluentd:15:in `<top (required)>'
[error]: #0 /usr/bin/fluentd:23:in `load'
[error]: #0 /usr/bin/fluentd:23:in `<main>'
[error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/splunk-fluentd-kube-audit.pos"
[error]: #0 suppressed same stacktrace
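The EACCES on the .pos file generally means the pod's user cannot write to the hostPath it mounts (here /var/log). One hedged direction, if this is the Splunk Connect for Kubernetes chart: grant the DaemonSet enough privilege to create its position files. The exact values keys vary by chart version, so treat this as illustrative:

```
# values.yaml sketch -- key names may differ between chart versions
fluentd:
  securityContext:
    runAsUser: 0   # run as root so fluentd can create /var/log/*.pos on the host
```

Alternatively, pre-creating the .pos files on each node with permissions matching the container user achieves the same effect without running as root.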
Hello, suppose I've got the following URLs among a lot of others (logs come from something close to Squid but not indexed properly by Splunk):

nav.smartscreen.microsoft.com:443
https://www.francebleu.fr/img/antenne.svg
http://frplab.com:37566/sdhjkzui1782109zkjeznds
http://192.168.120.25:25
https://images.taboola.com/taboola/image/fetch/f_jpg%2Cq_auto%2Ch_175%2Cw_300%2Cc_fill%2Cg_faces:auto%2Ce_sharpen/http%3A%2F%2Fcdn.taboola.com%2Flibtrc%2Fstatic%2Fthumbnails%2Fd46af9fc9a462b0904026156648340b7.jpg

I wish I could extract the port number when there is one. I saw a lot of similar cases on Splunk Answers, but the URL formatting was less variable than mine. The only way I found to achieve my aim was the following SPL:

index=* sourcetype=syslog | rex field=url "(http|https)?[^\:]+\:(?<port>[^\/]+)" | eval monport = if(isint(port), port, 0) | top monport

Is there a more elegant way to do this?
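A slightly tighter sketch: anchor on the host portion (everything before the first slash or colon after the optional scheme) so only an explicit port immediately after the host matches, then default to 0 when absent:

```
| rex field=url "(?i)^(?:https?://)?[^/:]+:(?<port>\d+)"
| fillnull value=0 port
| top port
```

Against the sample URLs above, this should capture 443, 37566 and 25, and leave the two port-less URLs at the default, while ignoring the colons deeper inside the taboola path.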
Hi, I have created a single-value and a statistical table panel using the base search below.

Base search:

<search id="search1"> <query>index=s (sourcetype=S_Crd OR sourcetype=S_Fire) | fields *</query> <earliest>-24h@h</earliest> <latest>now</latest> </search>

In search:

<single> <search base="search1"> <query> | rex field=_raw "Fire=(?&lt;FireEye&gt;.*?)," | rex mode=sed field=Fire "s/\\\"//g" | stats values(*) as * values(sourcetype) as sourcetype by sysid | fillnull value="" | eval OS=case(like(OS,"%Windows%"),"Windows",like(OS,"%Linux%"),"Linux",like(OS,"%Missing%"),"Others",like(OS,"%Solaris%"),"Solaris",like(OS,"%AIX%"),"AIX",1=1,"Others") | search $os$ | stats count</query> </search>

Sometimes I get correct values, but suddenly all panels, including this one, display 0. After pressing Ctrl+F5, the issue gets resolved. May I know the reason for this and how to resolve it in the dashboard?
Hello, I have configured the Splunk forwarder to send logs to Splunk on 6 servers. Logs are pushed to Splunk for some time, but then forwarding stops for some hours and resumes again after some hours. Can someone help identify the exact issue here?

inputs.conf

[monitor:///var/log/application/*.log]
sourcetype = app-us-west
index = us_west
disabled = false
recursive = true

outputs.conf

[indexAndForward]
index = false

[tcpout]
defaultGroup = default
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:default]
autoLB = true
autoLBFrequency = 30
forceTimebasedAutoLB = true
server = splunk-fwd-:9997
useACK = true

limits.conf

maxKBps = 0
Hi, I have a list of events spanning more than a year; each event contains the type of card and the transaction status. I want a table with a dropdown box for the user to choose a month, counting events by the chosen month, the month before, status, and type of card, and finally calculating the rate between them. For example, if the user chooses April, then MONTH-1 will be March, and the table will be like this:

CARD|STATUS|MONTH|MONTH-1|RATE
VISA|1 |3 |6 |100%
VISA|0 |8 |4 |50%
MC |99 |5 |9 |90%

I then encountered 2 problems:

1. I tried to test by simply displaying everything with stats:

index=index | stats count by date_month date_year STATUS CARD

but it doesn't display [CARD|STATUS|date_month|count] like I thought it would; it is blank. It still shows results if I only use date_month, or don't use it at all.

2. I don't know how to stats count by two separate months. I could display them all and then search using a token, but then I won't be able to show the month before side by side and calculate against it. There is also the problem of different years, e.g. 01/2022 and 12/2021.

If anyone knows the solution to these problems, I would very much appreciate it. Thank you in advance.
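One hedged approach to the month-vs-previous-month part: bucket events by calendar month with strftime (which handles year boundaries like 2021-12 vs 2022-01 naturally, since "%Y-%m" strings sort chronologically), count, then use streamstats to pull the previous month's count alongside each row. The $month$ token from the dropdown would hold a value like "2022-04". A sketch:

```
index=... earliest=-13mon@mon
| eval month=strftime(_time, "%Y-%m")
| stats count by CARD, STATUS, month
| sort 0 CARD, STATUS, month
| streamstats window=2 first(count) as prev_count by CARD, STATUS
| eval rate=round(100 * count / prev_count, 0) . "%"
| where month="$month$"
```

The first month in each CARD/STATUS group has no true predecessor, so its rate would need special handling; this is a sketch of the shape, not a tested query.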
Hi All, I haven't been able to find an answer on here that has fixed my problem. Yes, I have followed all of the instructions on the GitHub page, and I have tried on a Windows 10 VM and also on my home lab. It's been 8 hours of troubleshooting and I am not able to get my Splunk instance to recognize the data set. I have put this data into $SPLUNK_HOME/etc/apps and several other locations to try to have the instance ingest the data, to no avail. PLEASE HELP! I just want to learn and it's impeding my progress! Even though this is also a learning process.

Installation:
Download the dataset file indicated above and check the MD5 hash to ensure integrity.
Install Splunk Enterprise and the apps/add-ons listed in the Required Software section below. It is important to match the specific version of each app and add-on.
Unzip/untar the downloaded file into $SPLUNK_HOME/etc/apps
Restart Splunk

The BOTS v3 data will be available by searching: index=botsv3 earliest=0

Note that because the data is distributed in a pre-indexed format, there are no volume-based licensing limits to be concerned with.
Recently upgraded to SOAR 5.0.1 from Phantom 4.10, and I'm having some difficulty finding the old "API" actions that can do things like:

Available APIs:
set label
set sensitivity
set severity
set status
set owner
add list
remove list
pin
add tag
remove tag
add comment
add note
promote to case

In the new visual editor there is an option for adding "actions", but the API isn't listed in there. It only lists actions from my configured apps. How can we "set status" of a container in the new Visual Editor?
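In case it helps, these container-level operations still exist in the playbook automation API even where the new editor does not surface them as actions; a code/custom-function block in the playbook can call them directly. A sketch that only runs inside a SOAR playbook (verify the exact function names against your version's playbook API reference):

```python
import phantom.rules as phantom

def close_container(container):
    # update the container's status; sibling calls such as
    # phantom.set_severity and phantom.add_note live in the same module
    phantom.set_status(container=container, status="closed")
    phantom.add_note(container=container, note_type="general",
                     title="Closed by playbook", content="Auto-closed.")
```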
I've set Users/Preferences/Time Zone = GMT, and then I run some SPL with ... | timechart count span=24h. _time is displayed in the browser as YYYY-mm-dd. I then download the data as CSV. Within the CSV, _time is shown with a 5-hour offset, not GMT: 2021-12-08T00:00:00.000-0500. Why is the time zone preference setting not observed? Ironically, my OS time is CST (-6 hours), so I'm not sure where the -5 is coming from. Splunk Enterprise 8.1.4; client is Win 10 with Edge version 96.0.1054.57, if that matters. Thanks
Hi, I checked Splunkbase for an integration with an intel feed reader we use, Obstract (https://www.obstracts.com/), but was unable to find anything. They offer a TAXII feed (version 2.1), but I don't think this is supported by ES (this link says only 1.x is supported: https://docs.splunk.com/Documentation/ES/latest/RN/Enhancements). Can anyone confirm? If this is the case, is anyone else using Obstracts with Splunk ES?
Dear all, best wishes for 2022. Is it possible to use rtrim to remove all characters out of a search result that come after a specific character? For example, using a FQDN, is it possible to use rtrim to remove every character after the host name (so after the dot)? Original output: server1.domain.com Desired output: server1 I am aware that regex can solve this, but I am looking for alternative options to solve this problem. This solution should ideally be working for any combination of servers and domain names. Any help is welcome.
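For reference, rtrim only strips characters drawn from a fixed set at the end of the string, so it cannot cut at the first dot. One regex-free sketch uses split plus mvindex to keep only the first label of the FQDN, whatever the domain looks like (the field name fqdn is illustrative):

```
| eval short_host=mvindex(split(fqdn, "."), 0)
```

With fqdn="server1.domain.com" this yields short_host="server1", and it works unchanged for any host/domain combination.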
Hi, I have a table like this:

test | state_A | state_B | state_C
1 | ok | ko- WARN | ko - ERROR
2 | ko- WARN | ok | ok
3 | ok | ok | ok

I would like to create a field "global_state" with the value "done" if all state_* fields are "ok", and "issue" otherwise:

test | state_A | state_B | state_C | global_state
1 | ok | ko- WARN | ko - ERROR | issue
2 | ko- WARN | ok | ok | issue
3 | ok | ok | ok | done

I tried this foreach, but it is not working:

| foreach state_*  [ eval global_state= if(<<FIELD>>=="ko- WARN" OR <<FIELD>>=="ko - ERROR", "issue", "done") ]

The second condition in the if is not applied. Can you help me please?
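For reference, one reason a foreach like this misbehaves is that each iteration overwrites global_state, so only the last state_* field visited decides the result. A hedged sketch that initializes the field once and only downgrades it when a bad value is seen:

```
| eval global_state="done"
| foreach state_*
    [ eval global_state=if(like('<<FIELD>>', "ko%"), "issue", global_state) ]
```

The like('<<FIELD>>', "ko%") test covers both "ko- WARN" and "ko - ERROR" without depending on exact spacing.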
I've been investigating why I have not been receiving ES events for some time now. After upgrading ES, I had to reinstall a lot of the apps that were previously installed and configured. One of the things I have not been able to resolve is how to get ES to detect "Geographically Improbable Access Detected" again. My Authentication data model is receiving events again, and my asset_lookup_by_str has events. However, my asset_lookup_by_cidr does not return results, so I believe this may be the cause. How can I get asset_lookup_by_cidr to populate again?
Greetings, where can I disable the default Bucket Copy Trigger search to prevent jar files from returning in Splunk? Also, on which Splunk instance does this search need to be disabled? Please see below:

"Jar files matching the same filename of the files found in the directories above, but found in other directories on your Splunk instances are likely from normal Splunk operation (e.g. search head bundle replication) and can be safely deleted. If any jar files return in the splunk_archiver app, disabling the default Bucket Copy Trigger search in that app will stop this behavior from happening."

My Splunk architecture (airgapped) includes the following:
1 Search Head
1 Heavy Forwarder
1 Deployment Server
1 Cluster Master/License Master (operating as the same instance)
7 Indexers (all clustered)

Within my distributed environment, I just want to know where to disable this search to prevent this from happening again. Thank you. -KB