All Posts

This part is very important: "any that did not match in the last lookup would null out previous matches."
You could put a props.conf and transforms.conf config on the indexer which looks for a regex match and then puts all matches into the null queue, which removes them.

# put this stanza in props.conf
[yoursourcetype]
TRANSFORMS-anynamegoeshere = yourtransformname

# put this stanza in transforms.conf
[yourtransformname]
REGEX = <REGEX GOES HERE>
DEST_KEY = queue
FORMAT = nullQueue

I recommend making your regex as specific as possible, and replacing the "yoursourcetype" stanza name so that it applies specifically to the logs you would like to filter. It may also be a good idea to start with a transform that moves the filtered events to another index, so that you can double-check that it is only moving the logs that you would like to filter out.

# Put this in transforms.conf if you would like to move the filtered logs to another index instead of deleting them
[yourtransformname]
REGEX = <REGEX GOES HERE>
FORMAT = <dest index name>
DEST_KEY = _MetaData:Index
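For illustration, a filled-in version might look like the sketch below; the sourcetype name and regex are purely hypothetical placeholders, so substitute values that match your own data:

# props.conf (hypothetical sourcetype)
[my_app_logs]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf (hypothetical filter: drop lines containing " DEBUG ")
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue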
Are you previewing the upgrade or doing it for real? Either way, could you outline the commands you used and summarize the results the machine returned?
In a perfect world I'd find a way to get this into the time picker, but I haven't seen suggestions for that (please warn me if I've missed something).

Q: Is the solution I've found for dealing with the previous business day workable, or have I missed an edge case that people have seen before (e.g., it blows up in cron)? Thanks.

I'm trying to find some way to evaluate a time window during a business week. The goal is a dashboard with a drilldown to the previous business day (for comparison to the main graph giving today's data). This means processing last Friday on Monday. The basic question has been asked any number of times but the answers vary in complexity. The simplest approach I could find was using a 3-day window in the time picker and then adding an earliest/latest value via sub-select to limit the data: https://community.splunk.com/t5/Splunk-Search/How-to-to-dynamically-change-earliest-amp-latest-in-subsearch-to/m-p/631220

The approach of:

<your index search>
    [ search index=summary source="summaryName" sourcetype=stash search_name="summaryName" field1=*
      | stats count by _time
      | streamstats window=2 range(_time) as interval
      | where interval > 60 * 15
      | eval earliest=_time-interval+900, latest=_time
      | fields earliest latest ]

seems simple enough: generate an earliest/latest based on the weekday. Applying this to my specific case of business hours during the business week, I get this with a case on the weekday from makeresults, which at least seems like a lightweight solution:

index="foo"
    [ | makeresults
      | eval wkday = strftime( _time, "%a" )
      | eval earliest = case( wkday = "Mon", "-3d@d+8h", wkday = "Sun", "-2d@d+8h", wkday = "Sat", "-1d@d+8h", 1=1, "@d+8h" )
      | eval latest = case( wkday = "Mon", "-3d@d+17h", wkday = "Sun", "-2d@d+17h", wkday = "Sat", "-1d@d+17h", 1=1, "@d+17h" )
      | fields earliest latest ]
| stats earliest( _time ) as prior latest( _time ) as after
| eval prior = strftime( prior, "%Y.%m.%d %H:%M:%S" )
| eval after = strftime( after, "%Y.%m.%d %H:%M:%S" )
| table prior after

And it even seems to work: on Sunday the 17th I get:

prior                 after
2024.03.15 08:00:00   2024.03.15 16:59:59

The only question now is whether there is some edge case I've missed (e.g., running via crontab) where the makeresults will generate an oddball time or something. Thanks.
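For what it's worth, one way I've been sanity-checking the weekday logic is to pin _time to a known date and see which case fires; the date string below is just an example, and this only exercises the case() logic, not the real search:

| makeresults
| eval _time = strptime( "2024-03-18 02:00:00", "%Y-%m-%d %H:%M:%S" )
| eval wkday = strftime( _time, "%a" )
| eval earliest = case( wkday = "Mon", "-3d@d+8h", wkday = "Sun", "-2d@d+8h", wkday = "Sat", "-1d@d+8h", 1=1, "@d+8h" )
| eval latest = case( wkday = "Mon", "-3d@d+17h", wkday = "Sun", "-2d@d+17h", wkday = "Sat", "-1d@d+17h", 1=1, "@d+17h" )
| table wkday earliest latest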
If you're new to Splunk, I strongly recommend you NOT install multiple instances of Splunk on the same server. Doing so is a tricky practice that requires more than just separate subdirectories. You must also ensure each instance uses different ports and that those ports are configured correctly on the other instances. If you install one instance per server then you can keep the home directory as the default /opt/splunk and avoid the problem you are having. I suspect the problem stems from the home directory assigned to the user running Splunk. Try changing that directory to /opt/splunk_sh.
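For example, assuming the service account is literally named "splunk" (adjust the account name and path to your environment), something along these lines should point its home directory at the search head instance:

# Run as root while the splunk user has no active processes/logins
usermod -d /opt/splunk_sh splunk

# Or override HOME for a single command, so the CLI can create its .splunk directory
HOME=/opt/splunk_sh /opt/splunk_sh/bin/splunk init shcluster-config ...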
I'm currently trying to create a search head cluster for two search head servers while configuring the deployer server.

[Environment Description]
On Search Head Server 1 (10.10.10.5), there are two Splunk daemons installed as follows:
1) Search Head (SH)
   Path: /opt/splunk_sh   // I'm going to designate this daemon as a deployer member.
2) Indexer Cluster Master (CM)
   Path: /opt/splunk_cm
At this point, the account running each daemon on Search Head Server 1 is 'splunk', the same for both.

On Search Head Server 2 (10.10.10.6), there is one Splunk daemon installed:
1) Search Head (SH)
   Path: /opt/splunk_sh   // I intend to set this daemon as both a deployer member and the search head captain.

Deployer Server (10.10.10.9)
1) Search Head Deployer
   Path: /opt/splunk

So, with two search head servers and a deployer server in place, when I tried to configure the member settings on Search Head Server 1, I encountered the following error after entering the command:

[Command]
/opt/splunk_sh/bin/splunk init shcluster-config -auth <admin:adminpw> -mgmt_uri https://10.10.10.5:8089 -replication_port 8080 -replication_factor 2 -conf_deploy_fetch_url https://10.10.10.9:8089 -secret <<pw>>

[Command Result]
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Can't create directory "/opt/splunk/.splunk": No such file or directory

Please ignore the WARNING, as I haven't properly configured the SSL certificate files yet. The problem is that I'm having difficulty setting the splunk_home path correctly, as indicated by the question title. While searching through community posts, I tried the following, but it didn't work out:

Attempt 1) Setting /opt/splunk_sh/etc/splunk-launch.conf
I had already set SPLUNK_HOME=/opt/splunk_sh in this conf file when installing the two daemons.

Now I'm not sure what to do next. Please help me out.
Workaround? - Use Linux if possible.  
Hi @VK18,
the easiest and most effective approach for your requirement is to use a Search Head Cluster, which replicates all the configurations and app data (such as the KV-Store) between SHs.

If you don't have a SH-Cluster (also because you need at least three SHs and a Deployer), you have to create a workaround to align configurations. You have two choices: use a hardware platform such as VxRail that automatically synchronizes the two instances (both from Primary to Secondary and from Secondary to Primary), or create some scheduled scripts that copy conf files and KV-Store from one to the other.

The first one runs without problems. In the second case, instead, pay attention to how the script is applied: if the Primary is down and you are using the Secondary, then when the Primary comes back up and running you have to copy from Secondary to Primary, not in the usual direction. In other words, you have to add many checks to your script before executing the copy.

As you can understand, managing this process isn't so easy or safe! For this reason I suggest using a Search Head Cluster, which guarantees the complete and correct replication of all the objects.

Ciao.
Giuseppe
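As a rough illustration of the scripted approach only, a minimal sketch might look like the following; the hostname, paths, and schedule are assumptions to adapt, it deliberately ignores the KV-Store (which needs its own backup/restore handling), and it still lacks the direction checks described above:

#!/bin/bash
# Hypothetical one-way sync of app configs from the Primary SH to this (Secondary) SH.
PRIMARY=sh-primary.example.com
SPLUNK_HOME=/opt/splunk

# Only pull configs if the Primary is reachable; per the caveat above you would
# also need to verify that the Primary is really the live node before copying.
if ping -c 1 -W 2 "$PRIMARY" >/dev/null 2>&1; then
    rsync -a --delete "splunk@$PRIMARY:$SPLUNK_HOME/etc/apps/" "$SPLUNK_HOME/etc/apps/"
    "$SPLUNK_HOME/bin/splunk" restart
fi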
No, it's via another platform. Because of security reasons I can't share it, sorry. Do you have any ideas why it would happen?
Please make sure the inputs.conf file is installed on the UF.
Can you search for the UF's internal logs on the SH? If not, then the UF is not properly connected to the SH/indexer. Do you have a separate indexer?
Confirm the UF has read access to the files it is trying to monitor.
Please share the query you are using to search for the data on the SH.
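For the internal-log check, something along these lines should work on the SH (the host value is a placeholder for your forwarder's hostname):

index=_internal host=<your_uf_hostname> source=*splunkd.log
| stats count by source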
It sounds like it is still not happy with your pass4SymmKey. Could you say how many indexers and search heads you are using, whether this problem affects one or all of the search heads or indexers, and in which configuration file you've set the pass4SymmKey?
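For reference, the clustering key is normally set in server.conf on the cluster manager and on every peer, and the same non-default value must be used everywhere; a minimal sketch, assuming you edit $SPLUNK_HOME/etc/system/local/server.conf and restart afterwards (the secret itself is a placeholder):

# server.conf
[clustering]
pass4SymmKey = <the same non-default secret on the manager and all peers>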
It sounds like the Splunk App for SOAR would be a step in the right direction: https://splunkbase.splunk.com/app/6361 Even if it does not provide the exact dashboard you want, it does provide the data with which you can build dashboards showing, for example, the most active, most successful, or most failed playbooks in SOAR.
Excellent, it sounds like it is working with the right IP
How are you sending the query? Via curl? If so, could you post the request you are using?
One way you could do this is by spath-ing until you get a multivalue field containing each of the modifiedProperties JSON objects, then using mvfilter to filter that field down to only the "Group.DisplayName" JSON object, then spath-ing again to get the newValue:

| spath input=_raw path=properties.targetResources{}.modifiedProperties{} output=hold
| eval hold = mvfilter(like(hold,"%Group.DisplayName%"))
| spath input=hold path=newValue output=NewGroupName
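If you also need the value before the change, the same filtered field can be spath-ed a second time; this assumes the objects carry an oldValue key alongside newValue, as Azure AD audit events usually do:

| spath input=hold path=oldValue output=OldGroupName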
Hey Everyone, I would like to build a dashboard, or use any pre-defined one, that collects all the details of the SOAR platform and presents them in a summary report of how many active playbooks have been run, plus further information about successful actions and failed activities. Are there any apps that can assist with the creation of such a dashboard, or any suggestions on how to do it? I know there is one on SOAR to use, but I need to build this as a Splunk dashboard and not use SOAR itself.

Thanks,
Efi.
I was recently working on Splunk Enterprise Security to get a forwarder installed on a Linux machine and have its data show up on the server. While working on this, I noticed that the indexer search option was in red status, so I went ahead and enabled the suggestion the system was offering. After that the server asked for a restart and now it won't come back online. Could anyone help here please?

Below is the output when I run splunk start:

Done [ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available..............
WARNING: web interface does not seem to be available!

Further in the file /opt/splunk/var/log/splunk/splunkd.log, this is what I see:

03-17-2024 12:10:19.240 +0000 ERROR ClusteringMgr [33823 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.
03-17-2024 12:10:19.242 +0000 ERROR loader [33823 MainThread] - clustering initialization failed; won't start splunkd

I changed the pass4SymmKey and it did not help. Could anyone help here please?
If you just want the application codes, why are you doing the mstats?

| inputlookup app.csv
| where Type="error1"
| table application_codes
I had that feeling too, but it is still not working. Moreover, I tried the simplest query that does not include any special characters and it is still throwing the same error. This also fails:

index=* | eval bytes=10+20 | table id.orig_h, id.resp_h, bytes
The Github app for Splunk supports both Github and Splunk Cloud; it should be possible to set up the first part of your log path using its documentation: https://splunkbase.splunk.com/app/5596