All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Well... there are two different views on that. Technically you can do several things which aren't officially supported and which - while they do work - can get you into a "sorry, that's an unsupported setup" situation. But yes - if you have a cluster on RHEL8 and want to upgrade/migrate it to RHEL9, short of taking the whole system down and upgrading all servers at once, you have no other option than to have some servers on one system and some on the other. But I'd definitely go for minimizing the time the cluster spends in that state.
Running a clean install on RHEL 8.9, kernel version 4.18.0-553.34.1.el8_10.x86_64. Followed the instructions on the install page for the soar-prepare-system command, not running clustered, default options for everything, created the phantom user with no trouble. /opt/splunk-soar is owned by phantom; ran the soar-install command as phantom and got through everything fine until the GitRepos step, where it hit this error: "INSTALL: GitRepos Configuring default playbook repos Failed to bootstrap playbook repos Install failed." The detailed error logs look kind of ugly, but the relevant part seems to be: File \"/opt/splunk-soar/usr/python39/lib/python3.9/site-packages/git/cmd.py\", line 1388, in execute", " raise GitCommandError(redacted_command, status, stderr_value, stdout_value)", "git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)", " cmdline: git ls-remote --heads https://github.com/phantomcyber/playbooks", " stderr: 'fatal: unable to access 'https://github.com/phantomcyber/playbooks/': SSL certificate problem: unable to get local issuer certificate'"], "time_elapsed_since_start": 6.000021, "time_elapsed_since_operation_start": 4.386305} Any thoughts on how to get it to find the local issuer certificate, or another way around the issue? Thanks.
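The "unable to get local issuer certificate" error means the git that SOAR invokes cannot find a CA bundle that validates the GitHub endpoint (a common cause is a corporate TLS-inspecting proxy re-signing the traffic). A hedged sketch of a diagnostic — the path /etc/pki/tls/certs/ca-bundle.crt is the usual RHEL location for the system CA bundle, but verify it on your host:

```
# Run as the phantom user. Check whether git can reach the repo when
# explicitly pointed at the system CA bundle (typical RHEL path shown):
GIT_SSL_CAINFO=/etc/pki/tls/certs/ca-bundle.crt \
    git ls-remote --heads https://github.com/phantomcyber/playbooks

# If that works, make the setting persistent for the phantom user:
git config --global http.sslCAInfo /etc/pki/tls/certs/ca-bundle.crt
```

If a TLS-inspecting proxy is in play, you would also need its root CA added to the trusted bundle (on RHEL, typically by placing it under /etc/pki/ca-trust/source/anchors/ and running update-ca-trust) before the bootstrap step can succeed.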
No, it works a bit differently. You can check this thread to see how the various stages of event processing work, but I'm not sure if it's clear enough at this point so I'll add a few words to it. inputs.conf doesn't affect indexing. Indexing is what happens after an event has been read by the input, processed through the whole ingestion pipeline, and reached the indexing stage where it's written to disk (sometimes, depending on context, people use the term "indexing" for the whole ingestion pipeline after the input phase). inputs.conf only configures... well, inputs. If you have UFs, each of those UFs has local wineventlog inputs which read events from its own local EventLog. Those events are (only partially) processed by the UF and are forwarded to the downstream component (either an intermediate forwarder or an indexer) using a splunktcp:// output. And that downstream component receives them on its splunktcp:// input. So the wineventlog:// input settings don't apply to those events. In your case the blacklist entries should work, but they will only apply to events you're pulling locally from your Splunk server's EventLog. If you don't want those events ingested, you need to either blacklist them at each UF's input level (and that's the usual way to do it) or bend over backwards to create props/transforms to filter those events out.
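A minimal sketch of the usual approach - a blacklist in an inputs.conf deployed to each UF (for example via the deployment server), not on the indexer. The stanza name and event codes below are placeholders; substitute the noisy codes from your own environment:

```
# In an app's local/inputs.conf, deployed to each Universal Forwarder.
[WinEventLog://Security]
disabled = 0
# 4662 and 5156 are example placeholders -- list your own noisy event codes
blacklist = 4662,5156
```

The blacklist takes effect at the input stage on the UF, so the events are never forwarded at all, which also saves license and bandwidth compared to filtering at the indexer.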
Hello, attempting to upgrade our test environment from 9.3.2 to 9.4.0 on Windows Server 2019 fails with the following message found in splunk.log: <time> C:\windows\system32\cmd.exe /c "C:\Windows\system32\icacls "C:\Program Files\Splunk" /grant "LocalSystem:(OI)(CI)(F)" /T /C >> "<out to %temp%\splunk.log>" 2>&1" LocalSystem: No mapping between account names and security IDs was done. Successfully processed 0 files; Failed processing 1 files. Seems pretty straightforward: it's attempting to grant Full Control on all files and subdirectories... EXCEPT... it almost certainly should be "NT AUTHORITY\SYSTEM", not "LocalSystem". Pretty sure this is just a Linux vs. Windows nomenclature thing. Are there any suggestions for forcing the permission onto the correct account, or do I need to open a support ticket to have this fixed in the next release?
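As a hedged workaround sketch (untested here, and not an official fix): granting the permissions manually before re-running the installer may get past the failing step. Using the well-known SID *S-1-5-18 instead of an account name sidesteps the name-mapping problem entirely, since icacls accepts SIDs directly:

```
:: Run from an elevated cmd prompt. *S-1-5-18 is the well-known SID
:: for the SYSTEM account ("NT AUTHORITY\SYSTEM"), locale-independent.
icacls "C:\Program Files\Splunk" /grant "*S-1-5-18:(OI)(CI)(F)" /T /C
```

If the ACLs are already in place when the installer re-runs its icacls step, the failure of that step may no longer block the upgrade - but a support ticket is still worthwhile so the account name gets corrected upstream.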
Even in a cluster you could use different versions if you don't have any other options. But try to keep this period as short as possible - in practice, the time you need to update all nodes in the cluster to the same version. The same is valid for both OS and Splunk versions. I think the easiest way to do this is by adding new servers with the new OS version BUT the same Splunk version as the other nodes in your cluster. Here is an old post describing how this can be done: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538069#M4823. This is how I migrated a distributed Splunk environment to a new service provider and a newer OS without service breaks. The migration took a couple of weeks, but less than a month.
There are different factors at play here. Of course Splunk must be supported on the systems used. That's obvious. If you use clusters, the docs state that the same system/version is required for all nodes of the cluster. That is a bit vague and there has been a lot of discussion about what it actually means, but just to be on the safe side you should stick to the same release across all nodes of an indexer cluster or search head cluster. There is of course the general issue of maintainability, but that's a double-edged sword. A uniform environment is easier to maintain, but there's less work if you don't have to upgrade your systems soon. So it's your call. There is no requirement that the whole environment must use the same OS release (and there can't be, given that you can have separate search heads (or even SHCs) operated by - for example - different divisions of the company searching against the same indexers). Or you can have many different HFs doing different modular inputs; some of them could even be Windows-based. I've managed environments where - for example - some servers were CentOS and some were SUSE, and nothing blew up. So as long as you're not mixing systems across a cluster you should be fine.
Can you paste your Simple XML dashboard inside a code block (</>)? Include at least the part where you define the inputs and how you generate the dropdowns.
Running Splunk Enterprise on Windows Server 2016. Ingesting from Universal Forwarders on our Windows clients. There are a handful of very noisy event codes that I don't want to ingest. I was under th... See more...
Running Splunk Enterprise on Windows Server 2016. Ingesting from Universal Forwarders on our Windows clients. There are a handful of very noisy event codes that I don't want to ingest. I was under the impression that using a blacklist on the server's inputs.conf would just drop that data from being ingested, but I'm still seeing them when I search for the event codes.
I've created two dropdown menus that pass tokens into my search. The first dropdown selects a server (token $server$); the second filters the dashboard by application number (token $appnumber$). The host usually appears as host = servername-appnumber. I tried this: host="$server$-$appnumber$". What am I doing wrong? Any advice or help would be appreciated.
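For illustration, a minimal Simple XML sketch of the intended token concatenation - the index, dropdown choices, and values here are placeholders, not from the original post:

```
<form>
  <fieldset>
    <input type="dropdown" token="server">
      <label>Server</label>
      <choice value="srv01">srv01</choice>
    </input>
    <input type="dropdown" token="appnumber">
      <label>App number</label>
      <choice value="01">01</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- tokens concatenate with the literal hyphen: host="srv01-01" -->
          <query>index=main host="$server$-$appnumber$" | stats count by host</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Note that until both dropdowns have a selection (or a default), the tokens are unset and the search will not run at all - an unset token is a common reason a concatenated-token search appears to "do nothing".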
Ok, basically your environment is broken and it doesn't fulfill any of Splunk's requirements for a SHC! As I said, sooner or later you must rebuild it from scratch, and I suggest you do it as soon as possible. Right now you still have those apps, configurations, data, etc., and you could reuse them in a new environment without issues. But if your environment collapses, you will probably lose at least some of that data. If you don't know how to do this, please ask a local Splunk partner or Splunk PS to do it. There are good instructions in the Splunk docs on how to migrate from an individual SH to a SHC. You can use those as a starting point, but you will probably need to modify them somewhat depending on how your "SHC" is currently working.
Wait a second. What is your architecture? Because I have a feeling you're trying to do something different than you think. Are you running a Splunk instance on Windows and ingesting local events? Or are you expecting to filter events forwarded by remote forwarders?
Thanks for your help. I guess, it just needed a clean installation.
Anchoring the regex to the beginning of the string is not needed and actually significantly impacts the performance (match in 154 steps vs. 29 without the "^.*" part).
As @dural_yyz says, the regex is easy - you can use this with the rex command https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/rex  
Seems like your problem is that an ID in your XML is duplicated. It should be:  <p id="personal_valueA_token_id">$personal_valueA_token$</p> <p id="personal_valueB_token_id">$personal_valueB_token$</p> Your original duplicates personal_valueA_token_id: <p id="personal_valueA_token_id">$personal_valueA_token$</p> <p id="personal_valueA_token_id">$personal_valueB_token$</p>
In Developer Tools, the example you posted gives this:
The regex is simple enough. ^.*\sSetting\sconnector\s(?<connector_event>[^\s]+).*$
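For anyone wanting to sanity-check the capture outside Splunk, a quick sketch using Python's re module on the log line from the question; following the comment above, the `^.*` anchor is dropped here since it only adds backtracking steps:

```python
import re

log = ("[2025-01-22 13:33:33,899] INFO Setting connector "
       "ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 state to PAUSED "
       "(org.apache.kafka.connect.runtime.Worker:1391)")

# Same capture group as the rex above, without the leading ^.* anchor
pattern = re.compile(r"\sSetting\sconnector\s(?P<connector_event>\S+)")
m = pattern.search(log)
print(m.group("connector_event"))  # ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1
```

The `(?P<name>...)` syntax is Python's flavor of the named group; in SPL's rex the equivalent is `(?<name>...)`, as shown in the answer above.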
Go to Settings -> User Interface -> Navigation Menus, then edit the menu for the relevant app.
@mohsplunking - The errors definitely seem to be related to an SSL certificate file or the SSL certificate configuration in Splunk. * It's too broad a topic to say exactly what's wrong. * But check the SSL certs configured in Splunk, and for those cert files verify the expiration date and that the cert file itself is valid. * Make sure the Splunk config doesn't have any other issues. I hope this helps!
Hi Community, please help me extract the bold/underlined value (the connector name) from the string below: [2025-01-22 13:33:33,899] INFO Setting connector ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 state to PAUSED (org.apache.kafka.connect.runtime.Worker:1391)