All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@somesoni2 I am trying to parse a complex XML and am asking about the regex for SEDCMD-abremoveheader when the match must extend to line 3 of the XML or further below. (?s) works in regex101.com but not in Splunk's SEDCMD-abremoveheader. The solution below is NOT sufficient because it covers only the first two lines: https://community.splunk.com/t5/Getting-Data-In/Parsing-XML-and-props-conf-help/m-p/158263

Details: I use SEDCMD-abremoveheader to strip everything up to the desired location. It works when the match covers only the first two lines; it does not work when it extends to the 3rd line or any line further below. Here is an example XML:

<?xml version="1.0"?>
<config version="8.1.0" daaa="dummy">
<something>

The following works fine in Splunk XML parsing:

SEDCMD-abremoveheader = s/\<\?xml.*\s*\<config.*\>\s*//g

The following does not work in Splunk XML parsing, and also not in regex101.com:

SEDCMD-abremoveheader = s/\<\?xml.*\s*\<something.*\>\s*//g

The following works in regex101.com but not in Splunk XML parsing:

(?s)\<\?xml.*\s*\<something.*\>\s*

(?s) makes . match any character, including newlines. I also tried (?m); that does not work either.
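As a sanity check outside Splunk, the effect of (?s) (DOTALL) on the pattern above can be reproduced with Python's re module (a sketch; the header text is the example XML from the question, and Python's inline-flag semantics here mirror PCRE's closely enough for this test):

```python
import re

# The three-line header from the example XML above.
header = '<?xml version="1.0"?>\n<config version="8.1.0" daaa="dummy">\n<something>\n'

pattern = r'\<\?xml.*\s*\<something.*\>\s*'

# Without DOTALL, '.' stops at a newline, so the pattern cannot
# bridge the <config ...> line to reach <something> on line 3.
print(bool(re.match(pattern, header)))          # no match

# With (?s)/DOTALL, '.' also matches newlines, so the match can
# span all three lines.
print(bool(re.match('(?s)' + pattern, header))) # match
```

If SEDCMD will not honor the inline flag, one hedged alternative is to avoid needing DOTALL at all and strip the unwanted lines individually, e.g. s/\<(\?xml|config|something)[^\r\n]*\s*//g — whether that is sufficient depends on your LINE_BREAKER/props settings.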
Hi Friends, I am using Splunk Enterprise 7.1.1.0. During installation my ex-colleague used his own user ID, and the services are now running under his ID. Since he has left the organization, I need to replace his user ID with a service account. Please advise which places I need to consider checking and updating the account, for example Splunk services, any databases, etc. Thank you!
Hi,

I have installed the AppDynamics add-on for Splunk on a heavy forwarder which is sending events to Splunk Cloud. I am able to get the results of an analytics search query, but not all the results/events. The query returns 100+ events, but on the cloud I only see 0-5 events. On another heavy forwarder with the same search query, I am able to get the 100+ events on Splunk Cloud. Setting thruput maxKBps to 0 in the limits.conf file does not seem to have any effect. Any ideas on how to debug or solve the issue are appreciated.

Regards, Edward
Hello Splunkers: a scheduled report includes some data associations. After it ran, we found that some data could not be associated, but the association worked again when the report was run manually the next day. We suspect that the search head was too heavily loaded when running the scheduled report, which led to the loss. Could you tell me how to troubleshoot this? Thanks in advance.
hey,

I have a search like: index=a | table timestamp, id, name, age, message

I want to display only the first 10 rows of the message field, with an option to show the complete message field (I don't want to use substr, because that changes the field content). Thanks!
Hi, I'm trying to collect logs from a web server, but I'm getting an error on the firewall that says "tcp-rst-from-server" on port 9997. I also have another error, "tcp-rst-from-client", on port 8089. I have to say that there are other servers in the same VLAN that I am getting logs from. Where can I look to solve the problem?
Hi, I've got a task to do but I'm a complete newbie in Splunk, so could you guys help me? I have to send to Splunk logs which have names like "Crif.mc.Loader.log.2020-11-10", and every day a new file like this is created. Inside, the log looks like this:

:55:51,428 INFO LoaderLogger - subj_lien
2020-11-10 23:55:51,428 INFO LoaderLogger - subj_lien_debt
2020-11-10 23:55:51,428 INFO LoaderLogger - subj_lien_deposit

So how can I easily send all these logs into Splunk every day? Could you please write it in "splunkfordummies" style?
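A minimal sketch of the usual approach, assuming a Universal Forwarder is installed on the server that writes these files and that they live under a hypothetical /var/log/crif/ directory (the sourcetype and index names are placeholders too): a wildcard monitor stanza in inputs.conf on the forwarder picks up each day's new file automatically.

```
[monitor:///var/log/crif/Crif.mc.Loader.log.*]
sourcetype = crif_loader
index = main
disabled = 0
```

Because the date is in the filename rather than rotating in place, the wildcard is what makes each new daily file get tailed without further configuration.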
Hi, we are working on a migration across 2 different environments on Windows OS. Can we get details on where new indexes are defined on the indexer server?
Hi, we are working on a migration to different environments, and we want to forward the same data to different indexers (the new indexer as well), but it is forwarding only to the new indexer and not to the existing indexer. We did the steps below to forward the data. This is the outputs.conf file on the Splunk Universal Forwarder:

[tcpout]
defaultGroup = existingindexer,newindexer

[tcpout:lb]
server = existingindexer:9998
autoLB = true

[tcpout: newindexer]
server = server2.com:9998
autoLB = true

And in inputs.conf we kept both index names:

[script]
interval = 3600
sourcetype = sqlrun
index = old_index
disabled = 0

[script]
interval = 3600
sourcetype = sqlrun
index = new_index
disabled = 0
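One thing worth noting: each name listed in defaultGroup must exactly match a [tcpout:<name>] stanza. In the file in the question, defaultGroup references existingindexer and newindexer, but the stanzas are named [tcpout:lb] and [tcpout: newindexer] (with a stray space after the colon). A corrected sketch, keeping the server names from the question:

```
[tcpout]
defaultGroup = existingindexer, newindexer

[tcpout:existingindexer]
server = existingindexer:9998
autoLB = true

[tcpout:newindexer]
server = server2.com:9998
autoLB = true
```

With matching names, events are cloned to both target groups; a group name that matches nothing is simply never used, which would explain data reaching only one indexer.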
I have installed the Okta Identity Cloud Add-on in Splunk Cloud. I have set up individual indexes for each of the log types and accounts that I am ingesting. I can see that the Okta logs are being ingested correctly, but none of the default searches work. By default they search on sourcetype; however, this returns no results:

sourcetype="oktaim2:log" event_type="okta_event_authentication" (host="*")

If I manually add the index to the search, then it works:

index="okta-*" sourcetype="oktaim2:log" event_type="okta_event_authentication" (host="*")

I can't find anywhere to set the default index. If I do a plain search for sourcetype="oktaim2:log", it also returns no results. Is this an error in how Splunk installed the add-on, or a configuration step I have missed, or do I just have to manually adjust all of the views? Any thoughts appreciated.
Using a simple example: count the number of events for each host name:

... | timechart count BY host

This search produces this results table:

_time        host1  host2  host3
2018-07-05   1038   27     7
2018-07-06   4981   111    35
2018-07-07   5123   99     45
2018-07-08   5016   112    22

What I want is to add columns showing the total events for that day and the percentage contributed by each host:

_time        host1  host2  host3  host1_percent  host2_percent
2018-07-05   1038   27     7      92.8           3.1
2018-07-06   4981   111    35     94.5           4.5
2018-07-07   5123   99     45     95.2           3.2
2018-07-08   5016   112    22     96.2           3.2

I tried:

... | timechart count BY host
| eval total = host1 + host2 + host3
| eval host1_percent = host1 / total
| eval host2_percent = host2 / total
| eval host3_percent = host3 / total
| table _time, host1_percent, host2_percent, host3_percent

This works most of the time, but I found that if, on a certain day, a host was offline (no record for that host), then the search doesn't work (returns blank results), and I have to remove that particular host from "total = host1 + host2 + host3" to get it to work.

So my question is: is there a way to get the total number of records for every day (row) without having to add the columns together, e.g. replace "total = host1 + host2 + host3" with a count or sum? I tried a couple of things; none of them worked.
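The underlying issue — one missing column making the whole sum null — can be sketched in Python (host1..host3 are the field names from the example; treating a missing host column as 0 is what SPL's fillnull value=0 would do):

```python
def row_total(row, hosts):
    # Treat a missing host column as 0 instead of letting it
    # poison the whole sum (SPL's null + number = null behavior).
    return sum(row.get(h, 0) for h in hosts)

hosts = ["host1", "host2", "host3"]
row = {"_time": "2018-07-06", "host1": 4981, "host3": 35}  # host2 offline that day

total = row_total(row, hosts)
percents = {h: round(100 * row.get(h, 0) / total, 1) for h in hosts}
print(total, percents)
```

In SPL the analogous fixes are | fillnull value=0 before the eval, so absent hosts contribute 0, or | addtotals, which computes a per-row Total column without naming each host.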
Hi, I was asked to create a simple dashboard as part of my task. I created a CSV lookup as the source, and I want to display the values when I choose a specific "Reporting Date". For example, if I choose a Reporting Date of Oct 23 2020 in a dropdown, the dashboard should display only the lookup values associated with that date; if I choose another reporting date it should do the same. I only pasted a portion of my lookup, since it has several columns. I would like to get a simple idea of how to do this before I put it in a dashboard. Hope someone can provide some insights into this. Thank you.
I have an existing Splunk Enterprise installation on a server, and I'm planning to create a Dockerized Universal Forwarder instance to send logs to Splunk Enterprise. This Dockerized UF will monitor an existing Dockerized application, and I would like to know what I need to add to my configuration so that I am able to send logs from the UF to Splunk Enterprise.
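At minimum, the UF container needs an outputs.conf pointing at the Splunk Enterprise receiver (which must have a receiving port, typically 9997, enabled) and an inputs.conf monitoring the application's log path, shared into the UF container as a volume. A sketch; the hostname, paths, sourcetype, and index are placeholders:

```
# outputs.conf (on the UF)
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = splunk-enterprise.example.com:9997

# inputs.conf (on the UF)
[monitor:///shared/app-logs/*.log]
sourcetype = myapp
index = main
```

The key design point is the shared volume: the application container writes its logs to a path that is also mounted into the UF container, so the monitor stanza can tail them.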
Lately my teammates have been searching for email logs and not finding them until several hours or even a day later. I checked for latency issues by comparing log record times to _indextime, but never found latency of more than 20 minutes:

index=proofpoint source=proofpoint_message_log
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval delay=_indextime-_time
| timechart span=1m max(delay)

So ingestion seems to have happened within 20 minutes for all records, but my teammates cannot find some of the logs until hours later. Has anyone here seen this sort of thing? In the past I've only seen this as the result of delayed ingestion. What else could cause it? We did recently move to SmartStore. Is it possible that this is adding additional delay in Splunk determining which buckets to search?
Hi Splunkers! I'm working on a use case to show root activity in Linux data on a weekly basis.

index=os user=root sourcetype=linux_secure action=success app=sshd OR app=su
| eval _time=relative_time(_time, "@w1")
| timechart span=1w count by app

The query above actually works, but it also shows other timelines, which is confusing to my team. I'm looking for just the weekly line chart with dates and no other timeframes shown in the chart. Help me in solving this case. Attaching the screenshot. Appreciate your help!
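For reference, relative_time(_time, "@w1") snaps each timestamp back to the start of its week (midnight on Monday). The same snapping can be sketched in Python to verify what the eval is doing to the data:

```python
from datetime import datetime, timedelta

def snap_to_monday(dt):
    # Equivalent in spirit to SPL's relative_time(_time, "@w1"):
    # snap back to midnight on the most recent Monday.
    monday = dt - timedelta(days=dt.weekday())
    return monday.replace(hour=0, minute=0, second=0, microsecond=0)

# 2020-11-12 was a Thursday; it snaps back to Monday 2020-11-09.
print(snap_to_monday(datetime(2020, 11, 12, 15, 30)))
```

Since every event in a week collapses onto the same Monday timestamp, timechart span=1w then produces exactly one point per week.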
Hello everyone, I'm using the following SPL to get credit card numbers at search time (and I would like to keep this at search time):

| rex field=_raw "(?<firstStep_CC>(?:\d[ -]*?){13,30})"
| eval secondStep_CC=replace(firstStep_CC, "\-|\s|\.", "")
| rex field=secondStep_CC "(?<creditcard>^4[0-9]{12}(?:[0-9]{3})?)|(^5[1-5][0-9]{14})|(^6(?:011|5[0-9][0-9])[0-9]{12})|(^3[47][0-9]{13})|(^3(?:0[0-5]|[68][0-9])[0-9]{11})|((?:2131|1800|35\d{3})\d{11})"

With this SPL I get possible credit card numbers, but invalid numbers as well. Does anyone know a way to get only valid numbers? I thought of splitting each digit of creditcard and doing a mod-10 verification, but I don't know if that's efficient. I read a lot of posts like these:

https://community.splunk.com/t5/All-Apps-and-Add-ons/TA-luhn-How-to-use-the-luhn-command-and-how-to-apply-a-regex-to/m-p/221518
https://wiki.splunk.com/Community:Credit_card_masking_regex

Tks!
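The mod-10 check mentioned above is the Luhn algorithm, and it is cheap to compute. A minimal Python sketch (regex alone cannot express this check, which is why approaches like the TA-luhn custom command linked above exist):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn mod-10 check."""
    digits = [int(d) for d in number]
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # well-known Visa test number -> True
print(luhn_valid("4111111111111112"))  # last digit changed -> False
```

The check is O(n) over at most 19 digits, so the cost per candidate is negligible; the efficiency question is really about how many candidates the first rex produces.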
In our setup we have a search head cluster with no search affinity (site0) and a multisite indexer cluster (site1/site2). Now it's time for some expansion, and although we have already expanded the search head cluster, it is a first for the indexer cluster. The search tier uses the cluster master (CM) to discover the indexers. The forwarding tier uses indexerDiscovery, i.e. it also uses the cluster master (CM) to discover the indexers. The process to spawn a new indexer is pretty much automated by now, and from https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Addclusterpeer it is easy to understand why a rebalance may be required. The only thing that bothers me a bit is that on the forums there is general guidance to put the CM in maintenance mode (https://community.splunk.com/t5/Deployment-Architecture/Adding-a-new-indexer-to-the-indexer-cluster/td-p/298199). Any idea why it is recommended to put the CM in maintenance mode? AFAIK, maintenance mode only stops the bucket fix-up operations. Is there any other hidden operation that maintenance mode suppresses? What does maintenance mode do to make the procedure better/safer?
Hello, I am upgrading from the older Add-on for Windows Defender to the Microsoft 365 Defender Add-on for Splunk. The client ID, secret and tenant are all working fine in the old app. When I install the new Microsoft 365 Defender Add-on for Splunk and use the same credentials, I get this error:

2020-11-10 19:27:40,873 ERROR pid=77556 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/ta_ms_defender/aob_py2/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/input_module_microsoft_defender_atp_alerts.py", line 54, in collect_events
    access_token = azauth.get_access_token(client_id, client_secret, authorization_server_url, resource, helper)
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/azure/auth.py", line 21, in get_access_token
    raise e
KeyError: 'access_token'

These Azure apps from Splunk are giving me a headache. I have the same issue with the Azure add-on from Splunk. Why is Splunk making it so hard to upgrade reasonably straightforward apps?
An application log file contains a line like the ones below. I'm looking for a regex that extracts the value for each of the keys "0" / "1" / "2" / "3" into variables, which can be used later to draw a line chart:

Splunk item Total: [ 0=233 ]
or
Splunk item Total: [ 1=220 ]
or
Splunk item Total: [ 1=220 3=40 ]
or
Splunk item Total: [ 0=50 1=210 3=30 ]
or
Splunk item Total: [ 0=100 1=205 2=10 3=5 ]
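One way to frame the extraction: each pair has the shape <key>=<value>, so a repeated (\d+)=(\d+) match captures however many pairs a given line happens to have. A Python sketch of the idea (in SPL, a hedged analogue would be rex max_match=0 with named groups, e.g. "(?<slot>\d+)=(?<count>\d+)"):

```python
import re

line = "Splunk item Total: [ 0=50 1=210 3=30 ]"

# findall returns every (key, value) pair on the line,
# regardless of how many pairs are present.
pairs = dict(re.findall(r"(\d+)=(\d+)", line))
print(pairs)
```

Because missing keys simply produce no match, the same pattern handles all five sample lines; absent keys can then be defaulted to 0 before charting.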
I have data that is already indexed in Splunk, ingested through a Universal Forwarder, and I want to send this data from Splunk to Kafka. Is it possible to send Splunk logs to Kafka? If yes, how can we send Splunk logs to Kafka?