All Posts

Hi, I have created an add-on with Splunk Add-on Builder on Splunk Enterprise version 9 and installed it in Splunk Cloud. I am now facing some issues with the add-on. How can I check the logs of this add-on in Splunk Cloud? Please assist.
@Bhart1 wrote: So is there no way to have it match the first and last strings while excluding a certain middle part? Something like: "[string1, regex to exclude middle part, string2]" I mean it's pretty clear with the matching string and regex that the point is to match everything but the changing IP.

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost

You can do that, and it's done all the time. However, the regular expression MUST be a single quoted string. Something like this:

| regex process !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"

I disagree with @PickleRick about the escaping. I think you have that part right.
I'm trying to split a pair of rows with a pair of multivalued columns. The values in both columns are related by their position in the multivalued column. To make myself clear, here is the initial result table, and below it the table for the desired result. I tried mvexpand, but that doesn't give me the expected result.

Initial result:

| Domain Name | Instance name | Last Phone home | Search execution time |
| Domain1.com | instance1.com, instance2.com, instance3.com, instance4.com, instance5.com | 2022-02-28 | 2022-03-01, 2022-03-02, 2022-03-04, 2022-03-05 |

And I would like to transform it into this:

| Domain Name | Instance name | Last Phone home | Search execution time |
| Domain1.com | instance1.com | 2022-02-28 | 2022-03-01 |
| Domain1.com | instance2.com | 2022-02-28 | 2022-03-02 |
| Domain1.com | instance3.com | 2022-02-28 |            |
| Domain1.com | instance4.com | 2022-02-28 | 2022-03-04 |
| Domain1.com | instance5.com | 2022-02-28 | 2022-03-05 |
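A common approach to this kind of split (a sketch, not from this thread; it assumes the two multivalue fields align positionally, which the missing execution time for instance3.com would break) is to zip the two fields together with mvzip before expanding, so each instance stays paired with its date through mvexpand:

```spl
... base search ...
| eval pair=mvzip('Instance name', 'Search execution time', "###")
| mvexpand pair
| eval 'Instance name'=mvindex(split(pair, "###"), 0),
       'Search execution time'=mvindex(split(pair, "###"), 1)
| fields - pair
```

Note that mvzip stops at the shorter of the two fields, so a row with fewer dates than instances (like instance3.com here) would need padding first, and the "###" separator is an arbitrary choice that must not occur in the data.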
Hi @Andre_

As @inventsekar mentioned, you could use MAX_DAYS_AGO as follows:

== props.conf ==
# If within 3 days old.
[WinEventLog]
MAX_DAYS_AGO = 3

[XmlWinEventLog]
MAX_DAYS_AGO = 3

This will then only apply to XmlWinEventLog/WinEventLog.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @chrisboy68

How about using the bin command to bucket into 1-month blocks, then dedup on _time, or take first(fields), such as:

index=main
| bin _time span=1month
| dedup _time
| table bill_date ID Cost _time

or

index=main
| bin _time span=1month
| stats first(bill_date) as bill_date, first(ID) as ID, first(Cost) as Cost by _time

Or you could even look at timechart if useful.
Hi @Bedrohungsjäger

I would suggest checking out the Zscaler docs on ZPA logging to Splunk and the Log Streaming Service (LSS) at https://help.zscaler.com/zpa/about-log-streaming-service which has detailed docs and videos, and also check out the PDF deployment guide at https://help.zscaler.com/downloads/zscaler-technology-partners/operations/zscaler-and-splunk-deployment-guide/Zscaler-Splunk-Deployment-Guide-FINAL.pdf
Try something like this:

(index=xxx sourcetype=xxx) OR (index=summary_index)
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| timechart span=1h values(count) AS "Customers per Hour"
Hi @h2rr821

The 9.4.x release you have installed may well work on RHEL7; it is just not supported by Splunk. You can currently download 9.2.x, which is supported until Jan 31 2026 and does support RHEL7. See https://www.splunk.com/en_us/legal/splunk-software-support-policy.html?locale=en_us#:~:text=Splunk%20Enterprise%20/%20Splunk%20Analytics%20for%20Hadoop%20/%20Splunk%20Light* for more info.

Regarding the error, can you please confirm that there is no firewall between you and the Splunk instance and, if there is, that it is permitting your requests? I presume that the instance is not on the same machine you are working on? Does the system show port 8000 being listened on (e.g. ss -ltn)?
Hi @Cheng2Ready

You'd be much better off using stats here, I think, and loading in both searches at the start. Something like this might work, but it would be good if you could confirm the field which links them - is it trace?

(index=xxx sourcetype=xxx) OR (index=summary_index)
| stats values(index) as sources by trace
| where mvcount(sources) > 1

With your original search you would struggle to produce a timechart, because you don't have _time at that point. If possible, please give us further info and we can help with this.
So is there no way to have it match the first and last strings while excluding a certain middle part? Something like: "[string1, regex to exclude middle part, string2]"? I mean, it's pretty clear from the matching string and regex that the point is to match everything but the changing IP.

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost
MAX_DAYS_AGO - I would set this on the indexer? (Our setup is UF -> Indexer.)

Will that be a global setting for all incoming data?

Kind Regards,
Andre
How do you match a field ID between two indexes:
- without using a subsearch (due to the limit of 10,000 results), and
- without using the join command (it is resource intensive, and there are about 140,000+ results, so running join will take forever to load)?

I tried the following, but it doesn't seem to work:

index=xxx sourcetype=xxx
| eval source_index="a"
| append [search index=summary_index | eval source_index="b" | fields ID]
| stats values(source_index) as sources by trace
| where mvcount(sources) > 1
| timechart span=1h values(count) AS "Customers per Hour"

I'm trying to match the unique ID accounts field between the main search and the summary search; if it matches, we want it to give us a count of how many IDs there are, which will translate to customers per hour.
Ok. Firstly, it's bad syntax. The syntax (in your case) should be

| regex field!="regex"

while you have

| regex field!="regex" "something else"

And secondly, the regex provided as a string is subject to the normal string escaping rules. So your "\\W" effectively becomes a regex for \W, which means "any non-word character", and so on. You should also escape the backslashes for the actual regex classes. So instead of "\d" you should use "\\d", and so on.
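To make the double-escaping concrete, here is a minimal, self-contained illustration (made-up data; it follows the advice above that "\d" in the quoted string must be written "\\d" so it reaches the regex engine as \d):

```spl
| makeresults
| eval ip="10.1.2.3"
| regex ip="^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}$"
```

The quoted string is unescaped once by the SPL string parser, so the regex engine sees ^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$ and the event is kept.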
On the page you downloaded the trial version from, you should have a button or link to older versions. But be aware that older versions will run out of support sooner than the current one. Anyway, RHEL7 reached end of normal maintenance over a year ago, which means no more updates (even security ones). Splunk doesn't much care about the Python version in your OS since it brings its own.

And finally - what do you mean by "I can't access http://my_splunk:8000"? Do you get errors of any kind? What are they? Is your traffic filtered in any way? Can you connect from the Splunk server itself (with curl, for example)? Have you verified that the process is listening on that port? Do you have the port open on your OS-level firewall?
Your data is ugly. But almost all email data is ugly. So my solution will be even uglier (and horribly inefficient).

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
| eval file_name=mvmap(split(replace(mvjoin(mvindex(attachments,1,mvcount(attachments)),"|"),"([^|]+)\|([^|]+)\|","\\1|\\2||"),"||"),replace(attachments,"\|.*",""))

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
| eval file_hash=mvmap(split(replace(mvjoin(mvindex(attachments,1,mvcount(attachments)),"|"),"([^|]+)\|([^|]+)\|","\\1|\\2||"),"||"),replace(attachments,".*\|",""))

You might want to adjust the separators from | and ||.
Hello,

We are trying to see whether Splunk can be our dashboarding solution. I downloaded the trial version, which is 9.4.2 (I do see that the system support for version 9.4.2 is RHEL8 or RHEL9). Is there any other trial version I can download to try? (Our device uses RHEL7 and Python 2.)

I am able to install Splunk 9.4.2 on our system and run splunk start, but I cannot access the UI at the address http://{domain-name}:8000.
1. Download the Splunk App for Lookup File Editing.
2. On the Lookups menu, select All and search for mc_notes.
3. On the Actions menu, click the magnifier button to search the mc_notes lookup. A prompt will show up asking you to create a lookup transform. Add the name that you want and click Create transform.
4. Open a new search and run | inputlookup mc_notes to show the mc_notes content.
@richgalloway

Rule looking up process info in general:

| tstats `content_summariesonly` values(Processes.process_id) as process_id, values(Processes.parent_process_id) as parent_process_id values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where `process_powershell` (Processes.process="* -ex*" AND Processes.process="* bypass *")
    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `exceptions`
| stats values(dest) count by process, parent_process

Macro (exceptions):

search process != "blah"
| regex process !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" "Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"
I know this question is a little old, but I'm going to post something we did. I have a dashboard in Dashboard Studio with a handful of inputs and wanted to reset them with one click. I basically just made a refresh button in the dashboard that loads the dashboard back to its defaults. I created an image viz like this:

"type": "splunk.image",
"options": {
    "preserveAspectRatio": true,
    "src": "data:image/png;base64,iVBOR....."
},
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/app/<app>/<dashboard_title>"
        }
    }
]

and the URL was basically just the dashboard URL. You could include options in it as well, if you wanted to preserve some of the tokens and only reset a few (like adding

<dashboard_title>?form.time_token.earliest=%40d&form.time_token.latest=now

to keep a specific earliest/latest that may have been updated in the dashboard). Hope this helps - it isn't perfect, but it works.
I'm working with a Splunk Enterprise cluster deployed with the splunk-enterprise Helm chart. I'm trying to install Amazon's CloudWatch Agent onto my Splunk pods to send Splunk application logs to CloudWatch. I decided to try to do this by defining Ansible pre tasks and setting them in my Helm values.yaml, for example:

clusterManager:
  defaults:
    ansible_pre_tasks:
      - 'file:///mnt/playbooks/install_cloudwatch_agent.yml'

I got my pre tasks to run, but they're failing.

At first I tried to install CloudWatch Agent from yum, but this failed because the Python dnf module was missing. Actually, it looks like yum and dnf aren't installed at all in the splunk Docker image.

Then I tried to just download the RPM and install that, but this failed because I didn't have permission to get a transaction lock with rpm. I tried to solve the permissions issue by setting become_user: "{{ privileged_user }}" on my task, but this didn't work either, nor could I become root.

Are splunk-ansible pre tasks and post tasks an appropriate way to install additional supporting services onto the Splunk Enterprise pods like this? If so, are there any examples showing how to do it? If not, is there some other approach that would be a better fit?