Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi, I'm not sure how this is done on SCP; it probably also depends on which cloud experience you have. Maybe some Splunkers can clarify? Anyhow, on-prem you can do it both ways: you can terminate HEC on separate HFs behind the LB, or point the LB at the indexers. Personally I prefer separate HFs, as that combination disturbs the indexers and searches less. You must remember that when you, e.g., install a new props.conf and transforms.conf to manage HEC inputs, those nodes will be restarted! Here is a link to the SVA documentation where you can read more: https://docs.splunk.com/Documentation/SVA/current/Architectures/About r. Ismo
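For reference, a minimal sketch of what terminating HEC on a dedicated HF might look like in inputs.conf; the stanza name, token GUID, index, and sourcetype below are placeholders, not values from this thread:

# inputs.conf on each HF behind the load balancer (repeat on every HF so any node can accept the token)
[http]
disabled = 0
port = 8088

[http://my_app_hec]
token = 00000000-0000-0000-0000-000000000000
index = main
sourcetype = my_app:json

With this layout, a props.conf/transforms.conf change for HEC inputs only restarts the HFs and leaves the indexers undisturbed.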
I'm under the impression that HEC ingestion directly to the indexers is natively supported on Splunk Cloud. I wonder whether on-prem HEC ingestion directly to the indexers is supported in the same way?
We have a case in which each email message is stored on the file system as a distinct file. Is there a way to ingest each file as a distinct Splunk event? I assume that a UF is the right way, but I might be wrong.
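A hedged sketch of one common community approach, assuming a UF monitoring the mail directory and a custom sourcetype (all names and paths below are placeholders). On the UF, inputs.conf might look like:

[monitor:///var/mail/export/*.eml]
sourcetype = email_file
index = email

and on the indexers or HFs (where parsing happens, not on the UF), props.conf can use a line breaker that never matches, so each file lands as a single event:

[email_file]
SHOULD_LINEMERGE = false
# (?!) is a lookahead that always fails, so no line break is ever inserted
LINE_BREAKER = ((?!))
TRUNCATE = 0

TRUNCATE = 0 disables truncation, but very large messages may still hit other limits, so test against your biggest files.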
It’s possible to do an almost seamless migration for end users, as I said in my previous post. But it needs some manual work from admins and, of course, enough documentation for both.
Hello, I need your help with time picker values. I'd like to be able to keep some and hide others. I would like to hide all times linked to real time ("today", ...). In the predefined periods I want:

Yesterday
Since the start of the week
Since the beginning of the working week
From the beginning of the month
Year to date
Last week
The previous working week
The previous month
The previous year
Last 7 days
Last 30 days
Other
Anytime

I would also like to have: Period defined by a date, Date and period, Advanced.
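If it helps, the preset list is driven by times.conf, so one way to hide entries is to override the relevant stanzas in a local times.conf with disabled = 1. A sketch; the stanza names below are illustrative, so check $SPLUNK_HOME/etc/system/default/times.conf for the real ones:

# hide an unwanted preset by overriding its stanza
[realtime_30s]
disabled = 1

# a preset you want to keep (or add) is defined like this
[custom_yesterday]
label = Yesterday
earliest_time = -1d@d
latest_time = @d
order = 10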
hi @Elbald97 
In 9.4.0 the KV store has been upgraded from 4.2 to 7.x. As this is a major change for the KV store, per your details the upgrade is not working on the indexers. Try the following link for the pre-upgrade checks for the KV store upgrade: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore#Prerequisites

If these are indexers, the KV store is not needed on them. You can disable it on the indexers in server.conf:

[kvstore]
disabled = true

To avoid the KV store upgrade running along with the Splunk 9.4.0 upgrade, kindly set kvstoreUpgradeOnStartupEnabled=false; after the Splunk upgrade you can upgrade the KV store.

If these are search heads, do not disable the KV store. Kindly refer to the following documentation for more details:
https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore
https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/AboutupgradingREADTHISFIRST#Key_points_for_upgrading_to_version_9.4
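For clarity, a sketch of the server.conf change on an indexer; the placement of kvstoreUpgradeOnStartupEnabled under [kvstore] is my assumption, so confirm it against the MigrateKVstore page before use:

[kvstore]
# Indexers do not need the KV store, so it can be disabled outright:
disabled = true
# Alternatively, keep it enabled but defer the KV store upgrade past the 9.4.0 upgrade:
# kvstoreUpgradeOnStartupEnabled = false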
One option could be to use a separate data collector agent like Filebeat instead of trying to duplicate data somewhere in the middle of the path to Splunk. But as said, you should have someone look at and understand your situation and then make the plan that is best for you.
Awesome, thank you very much, that did the trick. I screwed up a little: after I tested it, I realized that I was wrong; the originating field can be like one of the following:

alert.alias = STORE_8102_BOXONE_MX_8102
alert.alias = STORE_8102_BOXONE_MX_8102_01

Is there a regex for the second field that would just capture everything after that third "_"? Thanks again, really appreciate the help, Tom
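One way to grab everything after the third underscore, regardless of how many segments follow (a sketch, untested against your data):

| rex field=alert.alias "^(?:[^_]+_){3}(?<field2>.+)$"

This skips exactly three underscore-delimited segments, so STORE_8102_BOXONE_MX_8102 yields field2=MX_8102 and STORE_8102_BOXONE_MX_8102_01 yields field2=MX_8102_01.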
I am using Windows 10 and Splunk Universal Forwarder version 9.4.0. When I run certain Splunk commands from an Admin Command Prompt, the command window freezes with a blinking cursor and fails to execute; I have to use Ctrl+C to stop the command.

Some commands work without issues, such as:
> splunk status (which confirms that Splunk is running)
> splunk version (which displays the version number)

However, other commands do not return any results; the cursor just blinks indefinitely:
> splunk list forward-servers
> splunk display local-index

Has anyone experienced this issue before or found a solution?
Hi all, I have a data structure like the following:

title1 title2 title3 title4 value

and I need to group by title1, taking the title4 where value (a numeric field) is max. How can I use eval in stats to get this? Something like:

| stats values(eval(title4 where value is max)) AS title4 BY title1

How can I do it? Ciao. Giuseppe
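One common way to express "title4 at the max value per title1" without eval inside stats is to find the per-group maximum first and then filter to it (a sketch using the field names from the post; values() keeps every title4 that ties for the max):

| eventstats max(value) AS max_value BY title1
| where value = max_value
| stats values(title4) AS title4 max(value) AS value BY title1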
Hi @tdavison76, if the structure of your field is always the same:

field1 = chars_numbers_chars
separator = _
field2 = chars_numbers_numbers

you can use a regex like the following:

| rex field=alert.alias "^(?<field1>\w+_\d+_\w+)_(?<field2>\w+_\d+_\d+)"

Ciao. Giuseppe
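A quick way to sanity-check the suggested rex against the sample value (a makeresults sketch):

| makeresults
| eval 'alert.alias' = "STORE_176_RSO_AP_176_10"
| rex field=alert.alias "^(?<field1>\w+_\d+_\w+)_(?<field2>\w+_\d+_\d+)"
| table alert.alias field1 field2

This should return field1=STORE_176_RSO and field2=AP_176_10.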
Hello, I am just trying to write a regex to split a single field into two new fields. The original field is:

alert.alias = STORE_176_RSO_AP_176_10

I need to split this out into 2 new fields:

First field = STORE_176_RSO
Second field = AP_176_10

I am horrific at regex and am not sure how I can pull this off. Any help would be awesome. Thank you for your help, Tom
By default the Splunk server receiving HEC is set to log only INFO and above. If you have a very limited number of receiving endpoints you can temporarily increase logging to DEBUG. With a small number of HFs or a small IDX tier this is feasible; with a large IDX tier it's not so easy. The debug output will be specifically helpful in identifying the source of bad connection attempts. I don't recall the token being visible, and since an invalid token has no categorization within the internal input configurations, really advanced answers are unlikely. I also don't recall any capability to receive and process data without a valid token, as doing so would create data-poisoning issues along with license-capacity issues.

UPDATE: Sorry, it just dawned on me this was for OTel, not Splunk receiving.
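For the Splunk-receiver case, the log level can usually be raised at runtime without a restart (a sketch; HttpInputDataHandler is my assumption for the relevant channel, so verify the channel name under Settings > Server settings > Server logging first):

splunk set log-level HttpInputDataHandler -level DEBUG
(reproduce the bad connection attempts, then revert)
splunk set log-level HttpInputDataHandler -level INFO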
Hello, I am following this document: https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/ConfigureandinstallcertificatesforLogObserver?ref=hk to configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect, but I am running into the problem described below. I have generated myFinalCert.pem as per the document; here are the server.conf and web.conf configurations.

# cat ../etc/system/local/server.conf

[general]
serverName = ip-xxxx.us-west-2.compute.internal
pass4SymmKey = $7$IHXMpPIvtTGnxEusRYk62AjBIizAQosZq0YXtUg==

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = $7$vboieDG2v4YFg8FbYxW8jDji6woyDylOKWLe8Ow==

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

# cat ../etc/system/local/web.conf

[expose:tlPackage-scimGroup]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups/*

[expose:tlPackage-scimGroups]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups

[expose:tlPackage-scimUser]
methods = GET,PUT,PATCH,DELETE
pattern = /identity/provisioning/v1/scim/v2/Users/*

[expose:tlPackage-scimUsers]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Users

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem

After making changes to server.conf I am able to restart the splunkd service, but after making changes to web.conf the restart gets stuck. Here is the relevant output:

# ./splunk restart
splunkd is not running. [FAILED]
Splunk> The IT Search Engine.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes... Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main sim_metrics statsd_udp_8125_5_dec summary
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-9.3.2-d8bb32809498-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done [ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available...............................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Please let me know if I am missing something. Thanks
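Not a fix, but two generic openssl checks that often narrow down a web-server hang on a combined PEM (a sketch; myCACertificate.pem is a placeholder for your CA file, not something from this thread):

openssl x509 -in /opt/splunk/etc/auth/sloccerts/myFinalCert.pem -noout -subject -issuer -dates
openssl verify -CAfile myCACertificate.pem /opt/splunk/etc/auth/sloccerts/myFinalCert.pem

The first reads only the first certificate block in the file, which confirms whether the server certificate sits first (the referenced doc builds the combined PEM as server cert, then private key, then CA cert); the second checks the chain against your CA.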
Hi, the Mimecast App gets events for most of the activity that occurs in the solution but does not give the option to get archive events. Does anybody know if they plan to add that functionality soon? (Just in case, so I do not have to develop that part on my own.) I am referring to these two API calls:

https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-message-view-logs/
https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-search-logs/

The rest is included in the current version, 5.2.0. And no, the events generated when someone reads the content of an email are not stored with the Audit events. Thanks!
Hi, I need help. I have just updated my indexer cluster, composed of 4 Windows 2022 servers, to the new Splunk version 9.4.0. As always I followed the update procedure, but this time one of my 4 servers refuses to update; it rolls back each time. I checked the failed-installation logs and noticed that the KV store was failing to update! Can anyone help me fix this problem? Thanks for your help.
I've been researching this for the last 30 minutes and can't find anything that would let you read that file. Everything is around the .conf files only, so not even scripts or such. You could maybe look into a dashboard with a custom JavaScript call, but that is outside my wheelhouse to even know if it is possible.
Hello, we have a lookup CSV file with 1 million records (data1) and a KV store with 3 million records (data2). We need to compare a street address in data2 with a fuzzy match of the street address in data1, returning the property owner. For example:

data2 street address: 123 main street
data1 street address: 123 main street apt 13

We ran a regular lookup command and this took well over 7 hours. We have tried creating a sub-address lookup (data1a) with the apt/unit numbers removed, but it is still a 7-hour search. Plus, if there is more than one apt/unit at the address, there might be more than one property owner. This is why a fuzzy-type compare is what we are looking for. Hope my explanation is clear; ask if not. Thanks and God bless, Genesius (Merry Christmas and Happy Holidays)
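For what it's worth, a hedged sketch of the normalization idea described above, done at search time instead of in a second lookup file (street_address is a placeholder field name; replace() strips a trailing apt/unit/suite suffix before the comparison):

| eval addr_norm = lower(trim(replace(street_address, "(?i)\s+(apt|unit|suite|#)\s*\S+\s*$", "")))

Matching both sides on addr_norm turns the fuzzy compare into an exact one, which a lookup can do in a single pass; as noted above, multiple units at one address will then return multiple owners.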
I have a client who wants to share the README file in their app with end users so that they can reference it in the UI. Seems reasonable, and it prevents them having to duplicate content into a view; otherwise the README file is only available to admins who have CLI access. I have tried using the REST endpoint to locate the file and have checked that the metadata allows read; it is just the path and the actual capability I am unclear on. https://<splunk-instance>/en-GB/manager/<redacted>/apps/README.md Thanks
Hi @Dawoo, how are you? You can follow the documentation steps to install the UF on macOS.