All Posts

A thing of beauty, marycordova - thank you!
Thanks for the reply @bowesmana. Yes, I would like to ignore special characters as well if possible. Your regex will work if the requirement is to ignore the numeric digits in alphanumeric words, but my requirement is to completely ignore the words that contain numeric digits.
@ITWhisperer Thanks for sharing the regex. It is working for some of the examples but not for all. I think this is because I have not clearly explained the requirement. My requirement is to capture all the words that contain letters only and completely ignore (reject) alphanumeric/numeric words and special characters. Also, I would like to extract the full text, not limited to 12 words. Could you please share the regex, and an explanation as well if possible? Sharing a couple of examples where the regex is not working:

1) Exception message - CSR-a4cd725c-3d73-426c-b254-5e4f4adc4b26 - Generating exception because of multiple stage failure - abc_ELIGIBILITY
Output with regex - "Exception message - CSR" (for some other records it comes out as "Exception message - CSR-a4cd725c")
Required Output - Exception Message CSR Generating exception because of multiple stage failure abc ELIGIBILITY

2) 0013c5fb1737577541466 - Exception message - 0013c5fb1737577541466 - Generating exception because of multiple stage failure - abc_ELIGIBILITY
Output - Exception message
Required Output - Exception message Generating exception because of multiple stage failure abc_ELIGIBILITY

3) b187c4411737535464656 - Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response creation couldn't happen for all the placements. Creating error response.
Output - Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response
Required Output - Exception message Exception in abc module. Creating error response Response creation couldn't happen for all the placements. Creating error response.
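For what it's worth, a lookaround-based pattern seems to match the required outputs: keep runs of letters that are not immediately preceded or followed by a letter or digit, so any token containing a digit is rejected whole, while "CSR" (followed by a hyphen) and "abc"/"ELIGIBILITY" (split by an underscore) survive. A minimal Python sketch against the first example above - treat it as a starting point, not a definitive answer:

```python
import re

# Letter-runs with no letter/digit glued to either side:
# "CSR" (followed by "-") survives, "a4cd725c" is rejected in full.
PATTERN = r"(?<![A-Za-z0-9])([A-Za-z]+)(?![A-Za-z0-9])"

sample = ("Exception message - CSR-a4cd725c-3d73-426c-b254-5e4f4adc4b26 - "
          "Generating exception because of multiple stage failure - abc_ELIGIBILITY")

words = re.findall(PATTERN, sample)
print(" ".join(words))
# Exception message CSR Generating exception because of multiple stage failure abc ELIGIBILITY
```

In SPL this would presumably look like `| rex max_match=0 "(?<![A-Za-z0-9])(?<word>[A-Za-z]+)(?![A-Za-z0-9])" | eval clean=mvjoin(word, " ")` - untested, so please verify against your own events.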
Hi Splunk community! I just want to let you know that I worked with one of our most senior engineers to update information in Splunk docs about NaN -- take a look at the info about NaN in the isnum and isstr sections in Informational functions in the Splunk platform Search Reference.   I know that playing around with NaN is irresistible, especially for our techiest Splunk experts, but the general advice from the sr engineer is to avoid using NaN in Splunk searches if possible unless you really really know what you're doing.   --Kristina
I totally agree with this, but when you must keep the business up and running and there's this thing we call physics, we have no option other than to wait. Here is an old story about it: https://web.mit.edu/jemorris/humor/500-miles
Oooooh, I gotcha. Thank you for the info! If I don't have a deployment server for the UFs, how would I go about updating their configs to drop the event codes I don't want coming into the index?
Well... there are two different views on that. Technically you can do several things which aren't officially supported and which - while they do work - can get you into a "sorry, that's an unsupported setup" situation. But yes - if you have a cluster on RHEL8 and want to upgrade/migrate it to RHEL9, short of taking the whole cluster down and upgrading all servers at once, you have no option other than to have some servers on one system and some on the other. But I'd definitely aim to minimize the time the cluster spends in that state.
Running a clean install on RHEL 8.9, kernel version 4.18.0-553.34.1.el8_10.x86_64. Followed the instructions on the install page for the soar-prepare-system command, not running clustered, default options for everything, created the phantom user with no trouble. /opt/splunk-soar is owned by phantom; ran the soar-install command as phantom and got through everything fine until the GitRepos step, where it hit this error:

"INSTALL: GitRepos Configuring default playbook repos Failed to bootstrap playbook repos Install failed."

The detailed error logs look kind of ugly, but the key part is:

File "/opt/splunk-soar/usr/python39/lib/python3.9/site-packages/git/cmd.py", line 1388, in execute
    raise GitCommandError(redacted_command, status, stderr_value, stdout_value)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
  cmdline: git ls-remote --heads https://github.com/phantomcyber/playbooks
  stderr: 'fatal: unable to access 'https://github.com/phantomcyber/playbooks/': SSL certificate problem: unable to get local issuer certificate'

Any thoughts on how to get it to find the local issuer certificate, or another way around the issue? Thanks.
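Not an official fix, but that stderr usually means git can't build the chain to the issuer (common behind a TLS-inspecting proxy). One approach that has worked in similar cases is pointing git at a CA bundle that contains the local issuer. The paths below are the RHEL defaults and are assumptions - substitute wherever your CA bundle actually lives:

```
# Assumption: the local/corporate root CA has been added to the system trust
# store (drop it in /etc/pki/ca-trust/source/anchors/ and run update-ca-trust).
# Then tell git (for the phantom user) to use that bundle:
git config --global http.sslCAInfo /etc/pki/tls/certs/ca-bundle.crt

# Or scope it to the installer's environment only:
export GIT_SSL_CAINFO=/etc/pki/tls/certs/ca-bundle.crt

# Verify the exact command the installer runs now succeeds:
git ls-remote --heads https://github.com/phantomcyber/playbooks
```

If that `git ls-remote` works for the phantom user, re-running soar-install should get past the GitRepos step.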
No, it works a bit differently. You can check this thread to see how the various stages of event processing work, but I'm not sure if it's clear enough at this point, so I'll add a few words to it.

inputs.conf doesn't affect indexing. Indexing is what happens after an event has been read by the input, processed through the whole ingestion pipeline, and reached the indexing stage where it gets written to disk (sometimes, depending on context, people use the term "indexing" for the whole ingestion pipeline after the input phase). inputs.conf only configures... well, inputs.

If you have UFs, each of those UFs has local wineventlog inputs which read events from its own local EventLog. Those events are (only partially) processed by the UF and are forwarded to the downstream component (either an intermediate forwarder or an indexer) using a splunktcp:// output. And that downstream component receives them on its splunktcp:// input. So the wineventlog:// input settings don't apply to those events.

So in your case the blacklist entries should work, but they will only apply to events you're pulling locally from your Splunk server's EventLog. If you don't want those events ingested at all, you need to either blacklist them at each UF's input level (and that's the usual way to do it) or bend over backwards to create props/transforms to filter those events out.
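To make the per-UF approach concrete, here's a sketch of what the blacklist could look like in the inputs.conf you push to each UF. The channel and event codes (4662, 5156) are placeholders - swap in your actual noisy codes:

```ini
# inputs.conf in an app deployed to each Universal Forwarder
# (e.g. $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf)
[WinEventLog://Security]
disabled = 0
# Drop these EventCodes at the input, before the UF forwards anything
blacklist = 4662,5156
```

After deploying, restart (or reload) the UFs; events with those codes should stop arriving at the indexer from that input.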
Hello,

Attempting to upgrade our test environment from 9.3.2 to 9.4.0 on Windows Server 2019 fails with the following message found in splunk.log:

<time> C:\windows\system32\cmd.exe /c "C:\Windows\system32\icacls "C:\Program Files\Splunk" /grant "LocalSystem:(OI)(CI)(F)" /T /C >> "<out to %temp%\splunk.log>" 2>&1"
LocalSystem: No mapping between account names and security IDs was done.
Successfully processed 0 files; Failed processing 1 files.

Seems pretty straightforward: it is attempting to grant Full Access/Control to all files and subdirectories... EXCEPT it almost certainly should be "NT AUTHORITY\SYSTEM", not "LocalSystem". Pretty sure this is just a Linux vs Windows nomenclature thing. Are there any suggestions for forcing the permission grant to use the correct account, or do I need to open a support ticket to have this fixed in the next release?
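As a possible stopgap until it's fixed (untested against the 9.4.0 installer, and the path is the default install dir - adjust if yours differs), one could pre-grant the same permissions by hand with the account name spelled the way Windows resolves it:

```
icacls "C:\Program Files\Splunk" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)(F)" /T /C
```

That's the same grant the installer is attempting, just with the resolvable account name; whether the installer then skips or tolerates its own failed icacls step would need to be verified.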
Even in a cluster you could use different versions if you have no other option, but try to keep that window as short as possible - in practice, the time you need to update all nodes in the cluster to the same version. The same is valid for both OS and Splunk versions. I think the easiest way to do this is by adding new servers with the new OS version BUT the same Splunk version as your cluster's other nodes. Here is an old post on how this can be done: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538069#M4823. This was the playbook I used to migrate a distributed Splunk environment to a new service provider and a newer OS without service breaks. The migration took a couple of weeks, but less than a month.
There are different factors at play here.

Of course Splunk must be supported on the systems used. That's obvious.

If you use clusters, the docs state that the same system/version is required for all nodes of the cluster. That is a bit vague and there has been a lot of discussion about what it actually means, but just to be on the safe side you should stick to the same release across all nodes of an indexer cluster or search head cluster.

There is of course the general issue of maintainability, but that's a double-edged sword. A uniform environment is easier to maintain, but there's less work up front if you don't have to upgrade every system right away. So it's your call.

There is no requirement that the whole environment use the same OS release (and there can't be, given that you can have separate search heads (or even SHCs) operated by - for example - different divisions of the company searching against the same indexers. Or you can have many different HFs doing different modular inputs; some of them could even be Windows-based). I've managed environments where - for example - some servers were CentOS and some were SUSE and nothing blew up.

So as long as you're not mixing systems across a cluster, you should be fine.
Can you paste your Simple XML dashboard inside a code block (</>)? Include at least the part where you define the inputs and how you generate the dropdowns.
Running Splunk Enterprise on Windows Server 2016. Ingesting from Universal Forwarders on our Windows clients. There are a handful of very noisy event codes that I don't want to ingest. I was under th... See more...
Running Splunk Enterprise on Windows Server 2016, ingesting from Universal Forwarders on our Windows clients. There are a handful of very noisy event codes that I don't want to ingest. I was under the impression that using a blacklist in the server's inputs.conf would just stop that data from being ingested, but I'm still seeing the events when I search for those event codes.
I've created two dropdown menus that take in tokens in my search:

1st dropdown lets me select a server (token $server$)
2nd dropdown filters the dashboard to individual application numbers (token $appnumber$)

The host usually appears as host = servername-appnumber, so I tried this:

host="$server$-$appnumber$"

What am I doing wrong? Any advice or help would be appreciated.
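For reference, a minimal Simple XML sketch of that setup - the index, dropdown choices, and stats clause are all made-up placeholders; the relevant detail is the token concatenation `$server$-$appnumber$` inside the quoted host filter:

```xml
<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="server">
      <label>Server</label>
      <choice value="web01">web01</choice>
      <choice value="web02">web02</choice>
    </input>
    <input type="dropdown" token="appnumber">
      <label>App number</label>
      <choice value="01">01</choice>
      <choice value="02">02</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main host="$server$-$appnumber$" | stats count by source</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

If the concatenated value doesn't match, it's worth checking (with a `host=*` search) whether the dropdown values carry stray whitespace or a different case than the actual host names.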
Ok, basically your environment is broken and it doesn't fulfill Splunk's requirements for a SHC! As I said, sooner or later you must rebuild it from scratch, and I suggest you do that as soon as possible. Right now you still have those apps, configurations, data etc., and you could reuse them in a new environment without issues. But if your environment collapses, you will probably lose at least some of that data.

If you don't know how to do this, please ask a local Splunk partner or Splunk PS to do it. There are good instructions in the Splunk docs on how to migrate from an individual SH to a SHC. You could use those as a starting point, but you'll probably need to adapt them depending on how your "SHC" is currently working.
Wait a second. What is your architecture? Because I have a feeling you're trying to do something different than you think. Are you running Splunk instance on Windows and ingesting local events? Or ar... See more...
Wait a second. What is your architecture? Because I have a feeling you're trying to do something different than you think. Are you running Splunk instance on Windows and ingesting local events? Or are you expecting to filter events forwarded by remote forwarders?
Thanks for your help. I guess, it just needed a clean installation.
Anchoring the regex to the beginning of the string is not needed and actually significantly impacts the performance (match in 154 steps vs. 29 without the "^.*" part).
As @dural_yyz says, the regex is easy - you can use this with the rex command https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/rex