All Topics

I have a field called folder_path whose values look like this:

folder_path
\Device\XYZ\Users\user_A\AppData\program\Send to OneNote.lnk
\Device\RTF\Users\user_B\AppData\program\send to file.lnk

Now I want to extract the following fields from "folder_path":

username    file_destination
user_A      Send to OneNote.lnk
user_B      send to file.lnk

As shown in the example, username is the path segment after the string "Users\", and similarly file_destination is everything after the last backslash. I have been trying a few ways but couldn't extract the fields properly, since the values contain backslashes.
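A minimal rex sketch for this (field names folder_path, username, and file_destination taken from the question; each literal backslash is written as \\\\ because both the quoted string and the regex engine consume one level of escaping):

| rex field=folder_path "\\\\Users\\\\(?<username>[^\\\\]+)\\\\"
| rex field=folder_path "\\\\(?<file_destination>[^\\\\]+)$"

The first regex grabs the segment between "Users\" and the next backslash; the second anchors to the end of the value and takes everything after the final backslash.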
Hi, one of my indexers has stopped receiving events because the indexer queue is full. Checking splunkd.log, I see lots of error messages:

02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Runtime exception in pipeline=indexerPipe processor=indexer error='Unable to create directory /mnt/splunk_index_hot/_internaldb/db/hot_v1_311115391 because Input/output error' confkey='source::/opt/splunk/var/log/splunk/splunkd.log|host::hostname|splunkd|1456359'
02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Uncaught exception in pipeline execution (indexer) - getting next event

Has anyone come across this situation? How can I fix it? Thanks!
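"Input/output error" is reported by the operating system rather than by Splunk itself, so the hot volume is the first suspect. A rough diagnostic sketch, assuming /mnt/splunk_index_hot is a mounted filesystem on Linux:

df -h /mnt/splunk_index_hot        # volume mounted and not full?
sudo touch /mnt/splunk_index_hot/w_test && sudo rm /mnt/splunk_index_hot/w_test   # writable at all?
dmesg | grep -iE "i/o error|xfs|ext4"   # device or filesystem errors in the kernel log?

If the kernel log shows device errors, the fix is at the storage layer (remount, fsck, or replace the disk); once the directory is writable again, the indexer's queues should drain on their own.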
I would like to set up alerts based on the running status of processes on a server. We have a couple of processes (services) running on a server, and to track their status I would like to set up alerts. I would appreciate it if anyone could share the possibilities. Thanks in advance.
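One common route, assuming Linux hosts with a Universal Forwarder and the Splunk Add-on for Unix and Linux (its ps scripted input produces sourcetype=ps; the index name os, host myserver, and process name myservice below are placeholders):

index=os sourcetype=ps host=myserver COMMAND=myservice
| stats count
| where count=0

Saved as an alert with the trigger condition "number of results > 0", this fires whenever the process has not been seen in the search window. On Windows hosts, the Windows add-on's process monitoring inputs would play the same role.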
I am trying to create a Splunk Classic dashboard to show metrics for important Splunk errors, such as crash logs and logs that point to Splunk performance issues. What are the important Splunk error logs that need to be monitored in this case, and where do I find them?
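Splunk's own logs are indexed in _internal (splunkd.log being the main one), and crash dumps are written to $SPLUNK_HOME/var/log/splunk as crash-*.log files. A starting-point sketch for a dashboard panel over the internal errors (slicing by component is just one option):

index=_internal sourcetype=splunkd log_level=ERROR
| timechart count BY component limit=10

Blocked-queue messages are a common performance signal worth a second panel, e.g. index=_internal source=*metrics.log group=queue blocked=true.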
As I write this I realize that what I want is likely not possible using this method. I want a fillnull (or similar) to happen before an eval. The eval is likely not even called if there are no events in the timechart span I am looking at. I want the eval to return a 1 when there are no events in that span.

This works, but is missing the eval:

index=main sourcetype=iis cs_host="site1.mysite.com"
| timechart span=10s max(time_taken)
| fillnull value=1

This is what I am using. It works, except for when no events happen:

index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up)

This charts a 1 if there was at least one 200 response from site1.mysite.com in the 10s span. It charts a 0 if there were responses but none were 200. If there are no matching events, the eval is probably never reached; nothing is returned and the chart looks like a 0. I want a 1 charted if there are no events in that 10s span.

Adding | fillnull value=200 sc_status after the timechart simply shows an extra column of sc_status at 200 in every span (column in the chart). Putting it before the eval does not work, since I believe nothing is done without an event. It should also only use fillnull (or similar) if no events are in that 10-second span. I have also tried | append [| makeresults ] without success, but I don't completely understand how that would work.

Logically, this is what I want; the reasoning behind the up/down status is not important, since this is simply an example. For each 10s span in the timechart:

| eval Site1_up=1 if cs_host=A and at least one sc_status=200
| eval Site1_up=0 if cs_host=A and no sc_status=200
| eval Site1_up=1 if there are no events matching cs_host=A
| eval Site2_up=1 if cs_host=B and at least one cs_method=POST
| eval Site2_up=0 if cs_host=B and no cs_method=POST
| eval Site2_up=1 if there are no events matching cs_host=B
| eval Site3_up=1 if cs_host=C AND cs_User_Agent=Mozilla and at least one cs_uri_stem=check.asmx
| eval Site3_up=0 if cs_host=C AND cs_User_Agent=Mozilla and no cs_uri_stem=check.asmx
| eval Site3_up=1 if there are no events matching cs_host=C

I am trying to make a chart of the up(1)/down(0) status of various components, some of which are determined by the IIS logs. Thanks.
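Since timechart emits a row for every span in the time range, with a null aggregate when a span has no events, the two queries above can be combined by filling nulls after the timechart: empty spans become 1 while genuine non-200 spans keep their 0. A sketch on the fields from the question (the AS rename just keeps the column name stable for fillnull):

index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up) AS site1_up
| fillnull value=1 site1_up

For the per-site version, the same idea works with one aggregate per site in a single timechart, using null() so that spans with no events for that site stay empty until fillnull, e.g. max(eval(if(cs_host="A", if(sc_status=200,1,0), null()))) AS Site1_up, each followed by its own fillnull.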
I have logs which contain parts like:

.. { "profilesCount" : { "120000" : 100 , "120001" : 500 , "110105" : 200 , "totalProfilesCount" : 1057}} ..

Here the key is an accountId and the value is the number of profiles in it. When I use max_match=0 in rex and extract these values I get accountId=[12000000, 12000001, 11001005] and pCount=[100, 500, 200] for this example event. Since these accountIds are not mapped to their corresponding pCount, when I visualize them I get:

accountId    pCount
12000000     100 500 200
12000001     100 500 200
11001005     100 500 200

How can I map them correctly and show them in a table? This was my search query:

search <search_logic>
| rex max_match=0 "\"(?<account>\d{8})\" : (?<pCount>\d+)"
| stats values(pCount) by account

Thanks in advance
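The usual fix is to keep each account glued to its count while both are still multivalue fields: zip them into pairs, expand one pair per row, then split each pair back into two fields. A sketch reusing the field names from the question:

search <search_logic>
| rex max_match=0 "\"(?<account>\d+)\" : (?<pCount>\d+)"
| eval pair=mvzip(account, pCount)
| mvexpand pair
| eval account=mvindex(split(pair, ","), 0), pCount=mvindex(split(pair, ","), 1)
| table account pCount

mvzip pairs the values positionally (default separator ","), which works because account and pCount come from the same rex call. \d+ is used here only because the sample keys vary in length; keep whatever matches the real accountIds, and note that a fixed-length pattern also conveniently skips totalProfilesCount.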
I have a lookup with a field called IP. The field has values that contain multiple IPs, and I would like to separate them out, each into its own field. Some IPs are separated by commas and some are separated by semicolons, and some values have 3+ IPs. Regardless, I need each IP beyond the first one to be in its own field column, named IP2, IP3, etc.

What I have:

IP
1.1.1.1,2.2.2.2

or

IP
1.1.1.1;2.2.2.2

I've tried something like the below, but the makemv only seems to work for the "," and the separated IPs still show up in the original IP field:

| makemv delim=";" allowempty=true IP
| makemv delim="," allowempty=true IP
| mvexpand IP
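One way to handle both separators at once is makemv's tokenizer option, whose regex capture group defines each value; numbered fields then come from mvindex (a sketch, extend the IPn list as needed):

| makemv tokenizer="([^,;]+)" IP
| eval IP1=mvindex(IP,0), IP2=mvindex(IP,1), IP3=mvindex(IP,2)

mvindex returns nothing when a row has fewer IPs, so IP3 simply stays empty for two-IP rows. If the goal were one row per IP rather than extra columns, the tokenizer line followed by | mvexpand IP would do it.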
Hi Team, I have a requirement to build a metrics report with the below conditions:

- Similar report for 3 different teams (each should not access the others')
- Underlying data (within the index) may contain sensitive information, so only the report should be accessible, not the entire index data
- Metrics for the 3 teams are present in the same index and sourcetype
- Should be flexible enough to include extra information within the report in the future (for history as well)

With these requirements, I thought of the below solutions but could not meet all requirements:

- Embedded reports: run only for a specific scheduled time range, so there is no flexibility in selecting different time ranges.
- Summary indexing: need to create separate summary indexes (per team) and create a report/dashboard using the summary index, but adding extra information for history metrics is difficult (that's my perception, correct me if I am wrong!).
- Creating datasets: we can create separate datasets with one single root search, but I am not sure how access controls work with datasets. Please enlighten!

Do we have any other, better solution? Or do you feel one of the above solutions would meet my requirements? Any suggestions are welcome!
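If summary indexing is chosen, the team isolation falls out of ordinary role-based index access: one summary index per team, with each team's role allowed to search only its own. A sketch of a scheduled search feeding one team's index (index names, the team field, and the stats line are placeholders for your metrics):

index=shared_metrics sourcetype=team_metrics team=team_a
| stats count AS events avg(duration) AS avg_duration BY metric_name
| collect index=summary_team_a

Since collect stores whatever fields the search produces, adding new columns later is just a change to the scheduled search, and history for the new columns can be backfilled by running the same search over older time ranges.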
Good afternoon, I'm looking for a way to track impossible-travel events for users who are logging in to applications using Duo 2FA. Basically, if a user gets a Duo push from an IP in, let's say, America, and then another Duo event from France within a short time period, this would be an event we want to investigate. Is it possible to do this using Splunk queries?
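A rough sketch of the usual approach (index, sourcetype, and the user/src_ip field names are assumptions about how the Duo data is onboarded): geolocate each authentication, line events up per user in time order, and flag country changes inside a short window:

index=duo sourcetype="duo:authentication" result=success
| iplocation src_ip
| sort 0 user _time
| streamstats current=f window=1 last(Country) AS prev_country last(_time) AS prev_time BY user
| where Country!=prev_country AND _time-prev_time<3600

A stricter version computes an implied speed from the lat/lon fields that iplocation also returns and flags anything faster than a plane; if Splunk Enterprise Security is available, its content library ships ready-made detections along these lines.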
This is very similar to a lot of XML parsing questions; however, I have read through ~20 topics and am still unable to get my XML log to parse properly. Here is a sample of my XML file:

<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="R" EventDateTime="2022-11-07T04:18:01"></EventIdentification></AuditMessage>
<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="E" EventDateTime="2022-11-07T05:18:01"></EventIdentification></AuditMessage>

Here are the entire contents of my props.conf file:

[xxx:xxx:audit:xml]
MUST_BREAK_AFTER = \</AuditMessage\>
KV_MODE = xml
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIMESTAMP_FIELDS = <EventDateTime>
TIME_PREFIX = <EventDateTime>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
category = Custom
disabled = false

I would need your assistance to parse the events. Thank you.
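For comparison, an untested minimal sketch that breaks on each <?xml header (which starts on its own line in the sample) and reads the timestamp from the EventDateTime attribute. Two hedged notes on the original: TIME_PREFIX must match the literal text just before the timestamp, which here is attribute syntax (EventDateTime=") rather than an element tag, and TIMESTAMP_FIELDS only applies to structured (INDEXED_EXTRACTIONS) sourcetypes, so it has no effect on raw XML:

[xxx:xxx:audit:xml]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\?xml
TIME_PREFIX = EventDateTime="
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
KV_MODE = xml
NO_BINARY_CHECK = true
category = Custom
disabled = false

With SHOULD_LINEMERGE = false, the MUST_BREAK_AFTER setting is no longer needed; LINE_BREAKER alone defines the event boundaries.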
I am configuring TLS certificate hostname validation with a self-signed certificate in Splunk Enterprise 9.0, and it seems that it cannot trust the CA:

ERROR X509Verify [TelemetryMetricBuffer] - Server X509 certificate (CN=,DC=,DC=,DC=) failed validation; error=19, reason="self signed certificate in certificate chain"

The configuration is the following:

[sslConfig]
# turns on TLS certificate requirements
sslVerifyServerCert = true
# turns on TLS certificate host name validation
sslVerifyServerName = true
serverCert = <path to your server certificate>

Do you know how I can tell Splunk which CA I'm using so it can trust the certificate? How can I configure it?
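The setting that names the trusted CA bundle lives in the same stanza: sslRootCAPath in server.conf points at a PEM file containing the root (and any intermediate) CA certificates. A sketch with a placeholder path:

[sslConfig]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = <path to your server certificate>
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACertificate.pem

splunkd needs a restart to pick up the change, and the serverCert PEM is normally the server certificate, its private key, and the CA chain concatenated in that order.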
I have a Splunk query as below which pulls some events:

index="windows_events" TargetFileName="*startup*"

From the events I picked the below TargetFileName field value:

\Device\HarddiskVolume3\Users\XYZ\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Send to AbC.lnk

Now I wanted to search specifically for that path, and for that I used the below query, which gives me no results:

`get_All_CrowdstrikeEDR` event_simpleName=FileCreateInfo os="Win" TargetFileName="*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\*"

What I don't understand is: the first query found events even though I used wildcards before and after "startup", so when I extended the wildcard with the actual value, why isn't it working? Can't I use backslashes in Splunk searches?
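Backslashes can be used, but inside a quoted search string the backslash is itself an escape character, so each literal backslash generally needs to be doubled (a sketch reusing the macro and fields from the question):

`get_All_CrowdstrikeEDR` event_simpleName=FileCreateInfo os="Win" TargetFileName="*\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\*"

The first query worked because "*startup*" contains no backslashes at all. In the second, the trailing \* may even be read as an escaped literal asterisk rather than a wildcard, which on its own would be enough to match nothing.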
I inherited a Splunk environment. I was informed the other day that a computers.csv lookup is not generating any results. Is there a way to find out what should be populating that file, which is currently empty? I did find the app which houses the lookup CSV.
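Lookup files like this are usually refreshed by a scheduled search that ends in outputlookup. One way to hunt for it from the search bar (the REST endpoint is standard; the filter just greps saved searches for the lookup name):

| rest /servicesNS/-/-/saved/searches
| search search="*computers*"
| table title eai:acl.app search cron_schedule disabled next_scheduled_time

If nothing turns up, the lookup may be fed by an external script or a lookup definition in that app's transforms.conf, and the scheduler's own logs (index=_internal sourcetype=scheduler) will show whether a feeding search has been failing or was disabled.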
Good afternoon, I'm having trouble changing the color of the data labels (the numbers) that appear on top of the bars. I need to change the current color (black) to white. Can someone help me? Panel code:

{
  "type": "viz.column",
  "title": "",
  "dataSources": {
    "primary": "ds_7YQhhskC"
  },
  "options": {
    "foregroundColor": "#FFFFFF",
    "fontColor": "#FFFFFF",
    "fieldColors": {
      "Sum of amount": "#A870EF"
    },
    "legend.placement": "top",
    "axisTitleX.text": "Days of the week",
    "axisTitleY.text": "Amount of transactions",
    "chart.showDataLabels": "all",
    "legend.labelStyle.overflowMode": "ellipsisNone",
    "yAxisVisibility": "show",
    "xAxisVisibility": "show",
    "backgroundColor": "transparent"
  },
  "showProgressBar": false,
  "showLastUpdated": false,
  "context": {}
}
Hi, I want to add a new search head to my existing 3-node SHC. My question is regarding the initialization step:

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

Specifically, the -secret <security_key> flag. If I look in server.conf on an existing SHC member I can find the hashed pass4SymmKey:

[shclustering]
pass4SymmKey = $9$dkjajkldjaj--

But I also have the original secret that was used to create the pass4SymmKey, e.g. password1234. Which do I use? And when I add the IDX cluster to the new SHC node, do I use the pass4SymmKey or the original secret? Thank you!
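As I understand it, both -secret on the CLI and pass4SymmKey in server.conf expect the plaintext value (password1234 in your example); splunkd encrypts it into the $9$... form on the next restart, so the hash on the existing member is the already-encrypted result rather than something to copy. The same logic applies to the pass4SymmKey you give the new node for the indexer cluster. In plaintext form the new member's stanza would briefly look like:

[shclustering]
pass4SymmKey = password1234

and turn into a $9$... string once the instance restarts.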
I am trialing Splunk and have installed the Splunk OTel Collector, but nothing is appearing in the console, and the access token shows 0 hosts tied to it.
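A first check, assuming the collector was installed on Linux with the documented installer script (which registers it as the splunk-otel-collector service), is whether the service is running and what its log says:

sudo systemctl status splunk-otel-collector
sudo journalctl -u splunk-otel-collector --no-pager -n 50

HTTP 401/403 lines in the log usually mean the access token is wrong, and the SPLUNK_ACCESS_TOKEN and SPLUNK_REALM values in /etc/otel/collector/splunk-otel-collector.conf are worth double-checking against the org's realm.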
Good day all! UF version 8.2.9 on a series of Linux machines. I've an application containing local/server.conf deploying to a series of Linux machines. The machines have a mixed configuration of short and FQDN hostnames; for consistency, I want to use the short name. Each instance's environment contains a variable called HOST_EXTERNAL, which is the short name. The documentation states:

* Can contain environment variables.
* After any environment variables are expanded, the server name (if not an IPv6 address) can only contain letters, numbers, underscores, dots, and dashes. The server name must start with a letter, number, or an underscore.

But I get:

ERROR: serverName must start with a letter, number, or underscore. You have: $HOST_EXTERNAL

serverName is only set in apps/app-name/local and system/default/server.conf:

system/default/server.conf:  serverName = $HOSTNAME
app-name/local/server.conf:  serverName = $HOST_EXTERNAL

Googling doesn't produce any examples of using an environment variable other than $HOSTNAME. What am I missing in attempting to use $HOST_EXTERNAL as serverName in server.conf? Thoughts?
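The error text suggests splunkd never expanded the variable (it still sees the literal string $HOST_EXTERNAL), which normally means HOST_EXTERNAL is not present in the environment splunkd itself starts with; a variable exported in a login shell is not enough. One thing to try (a sketch; splunk-launch.conf passes KEY=value lines into splunkd's environment at startup):

# $SPLUNK_HOME/etc/splunk-launch.conf
HOST_EXTERNAL=myshortname

On systemd-managed forwarders, an Environment=HOST_EXTERNAL=myshortname line in the unit file is the equivalent; myshortname is a placeholder that would need to be generated per host at deploy time.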
I have an OpenCanary which is using a webhook to deliver data into my Splunk instance. It works really well, but my regex is a bit rubbish and the field extraction is not going well. The wizard is getting me a reasonable way, but OpenCanary moves the log items around between rows, and this foxes the wizard, which seems to see the repetition and resists my attempts to defeat it when I try to take the text after certain labels (namely Port, which works as it's in the same location per line, plus Username, Password and src_host). Two lines which should help with understanding my challenge:

message="{\"dst_host\": \"10.0.0.117\", \"dst_port\": 23, \"local_time\": \"2023-02-08 16:20:12.113362\", \"local_time_adjusted\": \"2023-02-08 17:20:12.113390\", \"logdata\": {\"PASSWORD\": \"admin\", \"USERNAME\": \"Administrator\"}, \"logtype\": 6001, \"node_id\": \"hostname.domain\", \"src_host\": \"114.216.162.49\", \"src_port\": 47106, \"utc_time\": \"2023-02-08 16:20:12.113383\"}" path=/opencanary/APIKEY_SECRET full_path=/opencanary/APIKEY_SECRET query="" command=POST client_address=100.86.224.114 client_port=54770
message="{\"dst_host\": \"10.0.0.117\", \"dst_port\": 22, \"local_time\": \"2023-02-08 16:20:11.922514\", \"local_time_adjusted\": \"2023-02-08 17:20:11.922544\", \"logdata\": {\"LOCALVERSION\": \"SSH-2.0-OpenSSH_5.1p1 Debian-4\", \"PASSWORD\": \"abc123!\", \"REMOTEVERSION\": \"SSH-2.0-PUTTY\", \"USERNAME\": \"root\"}, \"logtype\": 4002, \"node_id\": \"hostname.domain\", \"src_host\": \"61.177.172.124\", \"src_port\": 17802, \"utc_time\": \"2023-02-08 16:20:11.922536\"}" path=/opencanary/APIKEY_SECRET full_path=/opencanary/APIKEY_SECRET query="" command=POST client_address=100.86.224.114 client_port=54768

Any regex expert's help will let me build out pivots and reporting for my OpenCanary, which gets around 200,000 connection attempts every 7 days.
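Since the message field holds a complete JSON document (only with escaped quotes), one route that avoids a regex per label is to pull the payload out, unescape it, and let spath turn every JSON key into a field regardless of where it sits in the line. A sketch:

... | rex field=_raw "message=\"(?<payload>\{.+\})\""
| eval payload=replace(payload, "\\\\\"", "\"")
| spath input=payload
| rename logdata.USERNAME AS username, logdata.PASSWORD AS password
| table src_host src_port dst_host dst_port logtype username password

The nested logdata keys (PASSWORD, USERNAME, LOCALVERSION, ...) come out as logdata.* fields, so items moving around between rows stops mattering, and the port-style fields (src_port, dst_port) arrive already separated.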
Hi! I'm trying to export the CMC health overview dashboard as a PDF and, hopefully, set it to send as an email attachment on a regular schedule. I have seen this answer: https://community.splunk.com/t5/Dashboards-Visualizations/Can-you-copy-a-dashboard-into-a-report/m-p/375881 and this doc https://docs.splunk.com/Documentation/Splunk/9.0.3/Report/GeneratePDFsofyourreportsanddashboards#:~:text=To%20schedule%20dashboard%20PDF%20emails,in%20the%20Data%20Visualizations%20Manual. on how to accomplish that, but the export options are not visible within the CMC app. Is this possible within the CMC app?
Because of a typo we had the following in our query:

earliest=-1@d

Since the Splunk query actually ran, I assume some kind of default value was used. I could not find such details in the docs.
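For reference, the presumably intended modifier next to the typo (the relative time syntax is [+|-]<integer><unit>@<snap-unit>; in -1@d the unit is missing):

earliest=-1d@d    one day back, snapped to the start of that day
earliest=@d       the start of the current day
earliest=-1@d     the typo in question: accepted, but with no unit stated

One way to see what the parser actually resolved it to is to let a search report its own effective time bounds:

index=_internal earliest=-1@d
| addinfo
| head 1
| eval resolved_earliest=strftime(info_min_time, "%F %T")
| table resolved_earliest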