All Topics

Hi, I have created a single value panel and a statistical table panel using the base search below.

Base search:

<search id="search1">
  <query>index=s (sourcetype=S_Crd OR sourcetype=S_Fire) | fields *</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>

Panel search:

<single>
  <search base="search1">
    <query> | rex field=_raw "Fire=(?&lt;FireEye&gt;.*?)," | rex mode=sed field=Fire "s/\\\"//g" | stats values(*) as * values(sourcetype) as sourcetype by sysid | fillnull value="" | eval OS=case(like(OS,"%Windows%"),"Windows",like(OS,"%Linux%"),"Linux",like(OS,"%Missing%"),"Others",like(OS,"%Solaris%"),"Solaris",like(OS,"%AIX%"),"AIX",1=1,"Others") | search $os$ | stats count</query>
  </search>

Sometimes I get correct values, but then suddenly all panels, including this one, display 0. After a Ctrl+F5 the issue resolves itself. Can anyone tell me the reason for this and how to fix it in the dashboard?
Hello, I have configured the Splunk forwarder on 6 servers to send logs to Splunk. Logs are pushed to Splunk for a while, then ingestion stops for some hours and resumes again a few hours later. Can someone help me find the exact issue here?

inputs.conf:

[monitor:///var/log/application/*.log]
sourcetype = app-us-west
index = us_west
disabled = false
recursive = true

outputs.conf:

[indexAndForward]
index = false

[tcpout]
defaultGroup = default
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:default]
autoLB = true
autoLBFrequency = 30
forceTimebasedAutoLB = true
server = splunk-fwd-:9997
useACK = true

limits.conf:

maxKBps = 0
Hi, I have a list of events spanning more than a year; each event contains the type of card and the transaction status. I want a table with a dropdown box for the user to choose a month, counting events by the chosen month, the month before, status, and type of card, and finally calculating the rate between them. For example, if the user chooses April, then MONTH-1 will be March, and the table will be like this:

CARD | STATUS | MONTH | MONTH-1 | RATE
VISA | 1      | 3     | 6       | 100%
VISA | 0      | 8     | 4       | 50%
MC   | 99     | 5     | 9       | 90%

I then encountered two problems:

1. I tried to test this by simply displaying everything with stats:

index=index | stats count by date_month date_year STATUS CARD

but it doesn't display [CARD|STATUS|date_month|count] like I thought it would; the result is blank. It still shows results if I use only date_month, or don't use it at all.

2. I don't know how to stats count by two separate months. I could display them all and then search using a token, but then I won't be able to show the previous month side by side and calculate the rate. There is also the problem of different years, e.g. 01/2022 versus 12/2021.

If anyone knows a solution to these problems I would be very appreciative. Thank you in advance.
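One possible sketch of an answer to the second problem (assuming the field names CARD and STATUS from the example, and a dropdown token $month$ in YYYY-mm form): bucket events by calendar month as a string, compute the previous month with relative_time so the year boundary is handled automatically, and chart the two months side by side:

```
index=index earliest=-13mon@mon latest=@mon
| eval month=strftime(_time, "%Y-%m")
| eval prev_month=strftime(relative_time(strptime("$month$", "%Y-%m"), "-1mon"), "%Y-%m")
| where month="$month$" OR month=prev_month
| eval key=CARD . "|" . STATUS
| chart count over key by month
```

The rate column could then be derived with a further eval dividing the two month columns. This is only a sketch; the token plumbing depends on how the dropdown is defined.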
Hi All, I haven't been able to find an answer on here that fixes my problem. Yes, I have followed all of the instructions on the GitHub page, and I have tried on a Windows 10 VM and also on my home lab. It's been 8 hours of troubleshooting and I am not able to get my Splunk instance to recognize the data set. I have put the data into $SPLUNK_HOME/etc/apps and several other locations to try to have the instance ingest it, to no avail. PLEASE HELP! I just want to learn and this is impeding my progress, even though this is also a learning process.

The installation instructions I followed:

1. Download the dataset file indicated above and check the MD5 hash to ensure integrity.
2. Install Splunk Enterprise and the apps/add-ons listed in the Required Software section below. It is important to match the specific version of each app and add-on.
3. Unzip/untar the downloaded file into $SPLUNK_HOME/etc/apps
4. Restart Splunk

The BOTS v3 data will be available by searching: index=botsv3 earliest=0

Note that because the data is distributed in a pre-indexed format, there are no volume-based licensing limits to be concerned with.
Recently upgraded to SOAR 5.0.1 from Phantom 4.10, and I'm having some difficulty finding the old "API" actions that could do things like:

Available APIs: set label, set sensitivity, set severity, set status, set owner, add list, remove list, pin, add tag, remove tag, add comment, add note, promote to case

In the new visual editor there is an option for adding "actions", but the API isn't listed there; it only lists actions from my configured apps. How can we "set status" of a container in the new Visual Editor?
I've set Users/Preferences/Time Zone = GMT, and then I run some SPL with ... | timechart count span=24h. _time is displayed in the browser as YYYY-mm-dd. I then download the data as CSV. Within the CSV, _time is shown with a 5-hour offset, not GMT: 2021-12-08T00:00:00.000-0500. Why is the time zone preference setting not observed? Ironically, my OS time zone is CST (-6 hours), so I'm not sure where the -5 is coming from. Splunk Enterprise 8.1.4, client Win 10 with Edge Version 96.0.1054.57, if that matters. Thanks
Hi, I checked Splunkbase for an integration with an intel feed reader we use, Obstracts (https://www.obstracts.com/), but was unable to find anything. They offer a TAXII feed (version 2.1), but I don't think this is supported by ES (this link says only 1.x is supported: https://docs.splunk.com/Documentation/ES/latest/RN/Enhancements). Can anyone confirm? If this is the case, is anyone else using Obstracts with Splunk ES?
Dear all, best wishes for 2022. Is it possible to use rtrim to remove all characters from a search result that come after a specific character? For example, given an FQDN, is it possible to use rtrim to remove every character after the host name (so everything from the first dot onward)? Original output: server1.domain.com. Desired output: server1. I am aware that regex can solve this, but I am looking for alternative options. The solution should ideally work for any combination of server and domain names. Any help is welcome.
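rtrim only strips a fixed set of trailing characters, so it cannot do this by itself. A regex-free alternative is split plus mvindex (the field name fqdn below is just an assumption for illustration):

```
| eval short_host=mvindex(split(fqdn, "."), 0)
```

split breaks the value on every dot into a multivalue field, and mvindex(..., 0) keeps the first element, so this works for any number of domain components.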
Hi, I have a table like this:

test | state_A  | state_B  | state_C
1    | ok       | ko- WARN | ko - ERROR
2    | ko- WARN | ok       | ok
3    | ok       | ok       | ok

I would like to create a field "global_state" with the value "done" if all state_* fields are "ok", and "issue" otherwise:

test | state_A  | state_B  | state_C    | global_state
1    | ok       | ko- WARN | ko - ERROR | issue
2    | ko- WARN | ok       | ok         | issue
3    | ok       | ok       | ok         | done

I tried this foreach, but it is not working:

| foreach state_* [ eval global_state=if(<<FIELD>>=="ko- WARN" OR <<FIELD>>=="ko - ERROR", "issue", "done") ]

The second condition in the if is not applied. Can you help me please?
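A likely cause: foreach runs the eval once per state_* field, so each iteration overwrites global_state and only the result for the last field survives. One sketch of a fix (assuming the "good" value is exactly "ok", as in the sample table) is to start from "done" and only ever flip to "issue":

```
| eval global_state="done"
| foreach state_* [ eval global_state=if('<<FIELD>>'=="ok", global_state, "issue") ]
```

This way a single non-"ok" field makes the result stick at "issue", regardless of the order in which the fields are visited.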
I've been investigating why I stopped receiving ES events some time ago. After upgrading ES, I had to reinstall a lot of the apps that were previously installed and configured. One of the things I have not been able to resolve is how to get ES to detect "Geographically Improbable Access Detected" again. My Authentication data model is receiving events again, and my asset_lookup_by_str lookup has events. However, asset_lookup_by_cidr does not return results, so I believe this may be the cause. How can I get asset_lookup_by_cidr to populate again?
Greetings, where can I disable the default Bucket Copy Trigger search to prevent jar files from returning in Splunk? Also, on which Splunk instance does this search need to be disabled? Please see below:

"Jar files matching the same filename of the files found in the directories above, but found in other directories on your Splunk instances are likely from normal Splunk operation (e.g. search head bundle replication) and can be safely deleted. If any jar files return in the splunk_archiver app, disabling the default Bucket Copy Trigger search in that app will stop this behavior from happening."

My Splunk architecture (airgapped) includes the following:

1 Search Head
1 Heavy Forwarder
1 Deployment Server
1 Cluster Master/License Master (operating as the same instance)
7 Indexers (all clustered)

Within my distributed environment, I just want to know where to disable this search to prevent this from happening again. Thank you. -KB
| savedsearch cbp_inc_base | eval _time=strftime(opened_time, "%Y/%m/%d") | bin _time span=1d

Here _time is returning the complete data; I want to filter it to one month, i.e. 30 days. I tried relative_time, but it only gives a specific day.
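Note that strftime turns _time into a string, which breaks bin and numeric time comparisons. A sketch of one way to keep only the last 30 days (assuming opened_time holds an epoch timestamp):

```
| savedsearch cbp_inc_base
| eval _time=opened_time
| where _time >= relative_time(now(), "-30d@d")
| bin _time span=1d
```

relative_time(now(), "-30d@d") returns the epoch value of midnight 30 days ago, so the where clause does the month filtering before the bin.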
I have syslog-pushed events which behave... weirdly around the end of the year. As we all know, there might be some delay between the source emitting an event and the HF receiving it (add to that possible slight clock drift and the time needed to pass the event between the various stages of my syslog environment; it can accumulate to several seconds). So if an event is sent with a timestamp slightly before midnight on Dec 31st (from the source's point of view) and is received by the HF just after midnight on Jan 1st, and the event itself doesn't contain the year, we get a very uncomfortable situation. The parts of the date that the input can make out from the event are filled in correctly, so we do get the "Dec 31st" part. But since there is no year in the syslog header, the HF fills in the year from the current date (again, from its point of view). So I end up with events indexed at "Dec 31st 2022". Is there some setting I'm missing that could prevent this? (I can't modify the timestamp format at the source.)
In Dashboard Studio I want to use dynamic dropdowns. The data source is a metric search like:

| mcatalog values("extracted_host") AS devices WHERE index=spec_metric extracted_host="*PI*"
| sort devices
| table devices

In a normal search I get all the devices, but in the dashboard my dropdown is empty. Is the problem that this is metrics data? Does anybody have a solution? Thanks in advance.
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException: Missing defines
We need to run a profiler on Windows IIS, and we need to track PHP, MySQL, and Redis. Could you please check and provide more information? If it is suitable, we will go with the premium plan.
Hi Community, is there a way to extract specific data from log strings and put it in tabular format? We have logs like: activity xxxx failed for account yyyy and for user zzzz. We need xxxx, yyyy, and zzzz as search data in tabular format for our alerts. Any help is appreciated! Thanking you in anticipation!
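If the messages really follow the literal pattern above, a rex extraction along these lines could work (index=your_index is a placeholder, the field names activity, account, and user are my own choice, and \S+ assumes the values contain no spaces):

```
index=your_index "failed for account"
| rex "activity (?<activity>\S+) failed for account (?<account>\S+) and for user (?<user>\S+)"
| table activity, account, user
```

rex applies the named capture groups to _raw and creates one field per group, so the table command can then lay them out as columns for the alert.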
Hello guys, Splunk newbie here. I hope someone can assist with my case: index=*_whatever is expected to be filled with data on a monthly basis. I want to create a dashboard that tracks which indexes are filled and which are not, so I can keep track and check which ones are empty. Thank you!!
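One possible sketch (untested, and assuming your role can see the indexes): tstats counts events per index over the last month, and the appended eventcount subsearch lists every matching index with a zero count so that empty indexes still appear in the result:

```
| tstats count where index=*_whatever earliest=-30d@d by index
| append [| eventcount summarize=false index=*_whatever | dedup index | eval count=0 | fields index, count]
| stats max(count) as events by index
| eval status=if(events > 0, "filled", "empty")
```

max(count) keeps the real tstats count where one exists and falls back to the zero row otherwise, giving a filled/empty flag per index for the dashboard.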
Hello! Can someone tell me the difference between /services and /servicesNS when using the Splunk REST API, please?
I noticed in our environment that, from many UFs, the internal logs were being indexed under a different index name. After investigation, I found it's related to some settings in transforms.conf:

[test_windows_index]
REGEX = .*
DEST_KEY = _MetaData:Index
FORMAT = rexall_windows

In props.conf, for certain hosts, there are settings like:

[host::testserver1]
TRANSFORMS-Microsoft_AD_1 = test_windows_index, Routing_testCloud

I believe I should try to exclude indexes like _internal and _audit, so I changed REGEX = .* to REGEX = [a-zA-Z0-9]+, but that doesn't seem to work. I'd appreciate it if somebody here can help or provide suggestions.
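The REGEX in that transform runs against _raw by default, so [a-zA-Z0-9]+ still matches nearly every event, including the internal logs. One commonly suggested approach (a sketch only, untested in your environment) is to key the transform on the index metadata instead and exclude the internal indexes with a negative lookahead, so the rewrite only fires when the original index is not one of them:

```
[test_windows_index]
SOURCE_KEY = _MetaData:Index
REGEX = ^(?!_internal$|_audit$|_introspection$)
DEST_KEY = _MetaData:Index
FORMAT = rexall_windows
```

With SOURCE_KEY set, the REGEX is evaluated against the event's current index name rather than its raw text; events already destined for _internal, _audit, or _introspection fail the match and keep their original index.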