All Posts


So is there no way to have it match the first and last strings while excluding a certain middle part? Something like "[string1, regex to exclude middle part, string2]"? I mean, it's pretty clear from the matching string and the regex that the point is to match everything but the changing IP.

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost
MAX_DAYS_AGO - would I set this on the indexer? (Our setup is UF -> Indexer.) Will that be a global setting for all incoming data? Kind regards, Andre
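For what it's worth, a hedged sketch of how that could look in props.conf on the indexer (the sourcetype name is hypothetical; MAX_DAYS_AGO is a per-stanza timestamp-extraction setting rather than a single global switch, so it applies wherever the stanza matches during parsing):

```ini
# props.conf on the indexer (parsing happens there in a UF -> indexer setup)
[my:sourcetype]          ; hypothetical sourcetype; a [default] stanza would affect all data
MAX_DAYS_AGO = 3650      ; accept event timestamps up to ~10 years in the past
```

Verify the attribute limits and defaults against the props.conf spec for your Splunk version before relying on this.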
How do you match a field ID between two indexes?

- without using a subsearch (due to the limit of 10,000 results)
- without using the join command (resource intensive, and there are about 140,000+ results, so running join will take forever to load)

I tried the following but it doesn't seem to work:

index=xxx source type=xxx
| eval source_index="a"
| append [search index=summary_index | eval source_index="b" | fields ID]
| stats values(source_index) as sources by trace
| where mvcount(sources) > 1
| timechart span=1h values(count) AS "Customers per Hour"

I'm trying to match the unique ID accounts field between the main search and the summary search, and if it matches we want a count of how many IDs there are, which translates to customers per hour.
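The usual subsearch-free pattern for this kind of correlation is one base search over both indexes plus stats. A sketch only, reusing the index names and the ID field from the post:

```spl
index=xxx OR index=summary_index
| eval source_index=if(index="summary_index", "b", "a")
| stats values(source_index) as sources min(_time) as _time by ID
| where mvcount(sources) > 1
| timechart span=1h dc(ID) as "Customers per Hour"
```

Because both indexes are scanned in a single search, neither the subsearch result limit nor join's per-event matching costs apply; the stats-by-ID step is what pairs events across the two sources.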
Ok. Firstly, it's bad syntax. The syntax (in your case) should be | regex field!="regex" while you have | regex field!="regex" "something else". And secondly, a regex provided as a string is subject to the normal string-escaping rules. So your "\\W" effectively becomes a regex for \W, which means "any non-word character", and so on. You should also escape the backslashes for the actual regex classes: instead of "\d" you should use "\\d", and so on.
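Applying both fixes to the pattern from the question gives something like this (a sketch only — `process` as the field name is an assumption, and note that every literal backslash in the Windows path becomes four: doubled once for the regex, doubled again for the SPL string):

```spl
| regex process!="^C:\\\\WINDOWS\\\\System32\\\\WindowsPowerShell\\\\v1\\.0\\\\powershell\\.exe\" \"Resolve-DnsName \\d{1,3}(\\.\\d{1,3}){3} \\| Select-Object -Property NameHost$"
```

The two separate quoted strings from the original are collapsed into a single pattern, with the IP portion replaced by a digit-group wildcard and the embedded quotes escaped as \" so they stay inside the one string argument.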
On the page you downloaded the trial version from, you should have a button or link to older versions. But be aware that older versions will run out of support sooner than the current one. Anyway, RHEL7 reached end of normal maintenance over a year ago, which means no updates anymore (even security ones). Splunk doesn't much care about the Python version in your OS since it brings its own. And finally - what do you mean by "I can't access http://my_splunk:8000"? Do you get errors of any kind? What are they? Is your traffic filtered in any way? Can you connect from the Splunk server itself (with curl, for example)? Have you verified that the process is listening on that port? Do you have the port open on your OS-level firewall?
Your data is ugly. But almost all email data is ugly. So my solution will be even uglier (and horribly inefficient).

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
| eval file_name=mvmap(split(replace(mvjoin(mvindex(attachments,1,mvcount(attachments)),"|"),"([^|]+)\|([^|]+)\|","\\1|\\2||"),"||"),replace(attachments,"\|.*",""))

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
| eval file_hash=mvmap(split(replace(mvjoin(mvindex(attachments,1,mvcount(attachments)),"|"),"([^|]+)\|([^|]+)\|","\\1|\\2||"),"||"),replace(attachments,".*\|",""))

You might want to adjust the separators from | and ||.
Hello, we are trying to see whether Splunk can be our dashboard solution. I downloaded the trial version, which is 9.4.2 (I do see that the system support for version 9.4.2 is RHEL8 or RHEL9). Is there any other trial version I can download to try? (Our device uses RHEL7 and Python 2.) I am able to install Splunk 9.4.2 on our system and run splunk start, but I cannot access the UI at the address http://{domain-name}:8000.
Download the Splunk App for Lookup File Editing. In the Lookups menu, select All and search for mc_notes. In the Actions menu, click the magnifier button to search the mc_notes lookup. A prompt will show up asking you to create a lookup transform. Add the name that you want and click Create transform. Open a new search and run | inputlookup mc_notes to show the mc_notes content.
@richgalloway 

Rule looking up process info in general:

| tstats `content_summariesonly` values(Processes.process_id) as process_id, values(Processes.parent_process_id) as parent_process_id values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where `process_powershell` (Processes.process="* -ex*" AND Processes.process="* bypass *") by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `exceptions`
| stats values(dest) count by process, parent_process

Macro (exceptions):

search process != "blah" | regex process !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" "Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"
I know this question is a little old, but I'm going to post something we did. I have a dashboard in Dashboard Studio with a handful of inputs and wanted to reset them all on one click. I basically just made a refresh button in the dashboard that loads the dashboard back to default. I created an image viz like this:

"type": "splunk.image",
"options": {
    "preserveAspectRatio": true,
    "src": "data:image/png;base64,iVBOR....."
},
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/app/<app>/<dashboard_title>"
        }
    }
]

and the URL was basically just the dashboard URL. You could include options in it as well, if you wanted to preserve some of the tokens and only reset a few (like adding <dashboard_title>?form.time_token.earliest=%40d&form.time_token.latest=now to keep a specific earliest/latest that may have been updated in the dashboard). Hope this helps - it isn't perfect, but it works.
I'm working with a Splunk Enterprise cluster deployed with the splunk-enterprise Helm chart. I'm trying to install Amazon's CloudWatch Agent onto my Splunk pods to send Splunk application logs to CloudWatch. I decided to try to do this by defining Ansible pre-tasks and setting them in my Helm values.yaml, for example:

clusterManager:
  defaults:
    ansible_pre_tasks:
      - 'file:///mnt/playbooks/install_cloudwatch_agent.yml'

I got my pre-tasks running, but they're failing.

At first I tried to install the CloudWatch Agent from yum, but this failed because the Python dnf module was missing. Actually, it looks like yum and dnf aren't installed at all in the splunk Docker image.

Then I tried to just download the RPM and install that, but this failed because I didn't have permission to get a transaction lock with rpm. I tried to solve the permissions issue by setting become_user: "{{ privileged_user }}" on my task, but this didn't work either, nor could I become root.

Are splunk-ansible pre-tasks and post-tasks an appropriate way to install additional supporting services onto the Splunk Enterprise pods like this? If so, are there any examples showing how to do it? If not, is there some other approach that would be a better fit?
Please put the *real* and *complete* macro definition in a code block so we know exactly what we're working with and can test it in our own sandboxes.  Please also include how the macro is used in a query.
Oh apologies for the misunderstanding. That's not how it really is. I just have that as a placeholder for the real field. It's like this: | regex fieldname != 
Have the same issue, did you ever find a fix?
[field] is improper syntax for the regex command.  Use the field name by itself.  If it's an argument to a macro then use $field$.
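If the field really does need to be a macro argument, a minimal macros.conf sketch could look like this (the macro name, field name, and pattern are all hypothetical; the point is only the $field$ substitution):

```ini
# macros.conf: a one-argument macro, invoked in SPL as `exclude_ip_regex(process)`
[exclude_ip_regex(1)]
args = field
definition = regex $field$ != "\\d{1,3}(\\.\\d{1,3}){3}"
```

Inside the definition, $field$ is replaced verbatim with whatever argument the caller passes, so the regex command receives a bare field name rather than a bracketed placeholder.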
This is a really old question, but I want to share what I've learned after a recent incident led us to look at our search artifact count and discover a huge spike after a config change.

So there is [currently] no setting you can update to fix the TTL config for DMAs. DMAs have a hard-coded TTL of 300s unless there are error messages from the accelerated search, in which case it goes to 86400s. It only goes back down to 300s once a run completes with no errors.

After this incident, we filed an enhancement ticket to make this configurable. It was just filed, though, and hasn't been triaged or worked on, so there is no timeline at all on when it would become configurable.

The best course of action for now is to figure out what the errors in the DMAs are (visible on the Data Model page when you expand the data model) and resolve them so the errors stop. Or just clear the directory manually or with some script.
Hey everyone, I'm doing testing regarding ingesting Zscaler ZPA Logs into Splunk using LSS, I'd like any assistance and any relevant configurations that could assist me.
Looking for SPL that will give me the ID Cost by month, grabbing only the last event (_time) for that month. I have a system that updates cost daily for the same ID. Looking for guidance before I venture down a wrong path. Sample data below. Thank you!

bill_date  ID  Cost  _time
6/1/25     1   1.24  2025-06-16T12:42:41.282-04:00
6/1/25     1   1.4   2025-06-16T12:00:41.282-04:00
5/1/25     1   2.5   2025-06-15T12:42:41.282-04:00
5/1/25     1   2.2   2025-06-14T12:00:41.282-04:00
5/1/25     2   3.2   2025-06-14T12:42:41.282-04:00
5/1/25     2   3.3   2025-06-14T12:00:41.282-04:00
3/1/25     1   4.4   2025-06-13T12:42:41.282-04:00
3/1/25     1   5     2025-06-13T12:00:41.282-04:00
3/1/25     2   6     2025-06-13T12:42:41.282-04:00
3/1/25     2   6.3   2025-06-13T12:00:41.282-04:00
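A possible starting point, sketched under the assumptions that bill_date already identifies the month and that the index/sourcetype placeholders below are filled in. stats latest(Cost) returns the Cost carried by the event with the most recent _time in each group, which is exactly the "last event for that month" per ID:

```spl
index=your_index sourcetype=your_sourcetype
| stats latest(Cost) as Cost max(_time) as last_update by bill_date ID
| sort bill_date ID
```

If bill_date were ever missing or unreliable, an alternative would be deriving the month from _time itself with bin _time span=1mon before the stats.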
I have a question regarding how to handle a regex query in a macro. Below is a regex similar to the one I'm using; it matches when I use a regex checker, but when I try to add it to a simple search macro in Splunk it gives an error: Error: Error in 'SearchOperator:regex': Usage: regex <field> (=|!=) <regex>. The macro is tied to the rule. It basically has the first part of a script, then an IP address it ignores, and then a second part of the script. The one below is really simplified but gets the same error.

Regex example:

| regex [field] !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" "Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"

String to check against in this example:

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost

I feel like this should work, but maybe there is something I'm missing about how Splunk handles regex and how I need to tweak it. Any info on this would be greatly appreciated. Thanks.
Hi @sawwinnaung , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors