All Topics



Hello All,

I am new to Splunk and am trying to create an executive-level dashboard for a few of our enterprise applications. These applications produce very large volumes of data (around 100GB/day), and my normal indexed searches take forever and sometimes time out as well. I need some guidance on how I could achieve faster/optimised searches.

I tried using tstats, but I am running into problems: the data is not fully structured, and I am not able to run aggregate functions on the response times because the values have the string "ms" at the end or are not in key-value form. See the examples below.

Data 1:
2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)

Data 2:
09/27/2022 16:34:57:998|101.123.456.789|106|1|C97EC2DA10C64F64A83C87AEEC1CDDBE703A546E1B554AD1|POST|/api/v1/resources/ods-passthrough|200|97

Data 2 fields: date_time|client_ip|appid|clientid|guid|http_method|uri_path|http_status_code|response_time

Need help/suggestions on how to achieve a faster search.
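One approach (a sketch, not a drop-in answer; the index, sourcetype, and field names are assumptions based on the samples above) is to extract the numeric part of the timing field at search time and convert it with tonumber() before aggregating:

```
index=app_logs sourcetype=app1_log
| rex field=_raw "\(Total_TT:(?<total_tt_ms>\d+)ms\)"
| eval total_tt_ms=tonumber(total_tt_ms)
| stats avg(total_tt_ms) AS avg_total_tt_ms perc95(total_tt_ms) AS p95_total_tt_ms count BY host
```

For the pipe-delimited Data 2 format, a similar rex (or DELIMS in transforms.conf) can name the fields. More generally, for an executive dashboard over 100GB/day, pre-aggregating with a scheduled saved search into a summary index, or accelerating a data model so tstats can be used, is usually what keeps the panels fast.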
In syslog-ng, I don't want to read the data and store it; how do you do that?
I would like to use props.conf and/or transforms.conf to parse data coming from a generic single-line log file, using a regex to search for "Error" or "Notice". I tested my regex on regex101, and it seems OK:

regex = (?<=Error)(.*$)

I already have a sourcetype for the incoming data. What should I be looking for, and which files should I edit to allow this?

Thanks, eholz1
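For a search-time field extraction, an EXTRACT line in props.conf alone may be enough; a sketch, assuming your sourcetype is named my_sourcetype and you want everything after the keyword in a field called detail:

```
# props.conf (search-time extraction; no transforms.conf needed for this form)
[my_sourcetype]
EXTRACT-detail = (?:Error|Notice)(?<detail>.*)$
```

If you need the extraction at index time instead, that is the REGEX/FORMAT/WRITE_META form in transforms.conf, referenced by a TRANSFORMS- line in props.conf for the same sourcetype stanza.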
The company I work for paid for Splunk Observability Cloud. We got a SKU number and an entitlement document, but we still don't have access to https://app.us1.signalfx.com/#/home.

- We had a trial beforehand that our Splunk team recommended so we could learn the Splunk Observability Cloud tool. We can still access the trial, but we can't access any new account that we paid for, and the trial hasn't converted into a full account. (I don't know what is supposed to happen; there is no billing page anywhere.)
- We reached out yesterday to our Splunk team after we got a SKU from sales but didn't know how to apply it. They said they would look into the issue.
- They looked into the issue and told me to set up a Splunk account. I had already set up a Splunk account with my own email (let's say bill@conglomo.org) and the IT email (it@conglomo.org), and I didn't see anything about payment, billing, etc.
- I gave them snapshot pictures of what I saw, showing the Splunk dashboard for both my account and the IT account. They said they hadn't set up an entitlement document for us yet.
- So they set up and sent over an entitlement document, then told me to log in again and see if I noticed anything. However, NOTHING was different: both accounts show "trial" in Splunk Observability, and there is nothing new on the normal Splunk dashboard webpage.
- I asked them where I can specifically look for billing, entitlement, or the SKU in the Splunk dashboard or on the Observability Cloud website. They said to call support.
- I tried calling support twice; the automated phone system hung up on me.

At this point, I just want to open a case somewhere online where someone can help me. Is that possible?
Hi Team,

We are planning to upgrade Splunk from version 8.2 to 9.0. As part of this process we installed the Python upgrade readiness app and ran the scan. All the apps and other configs were found to be compatible with 9.0, but 2 of the system configuration checks failed:

1. MongoDB TLS and DNS validation check.
2. Search peer SSL config check.

Please suggest how we can modify/upgrade the system config so that our environment will be ready for Splunk version 9.0. Thanks in advance!
Hi Team,

I am running Splunk 8.2.2 and want to upgrade to Splunk 9.0. Please address the queries below, as I am performing this activity for the first time.

1. How do I download the licensed installer from the Splunk website? It only offers the product download with 60-day validity.
2. What pre- and post-upgrade steps are required?
3. Which directories should we back up in case anything goes wrong?

Thanks in advance!
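For question 3, the usual minimum is $SPLUNK_HOME/etc, which holds all configuration, apps, and users. A hedged sketch, assuming a default /opt/splunk install path that you should adjust to your environment:

```shell
# Back up the Splunk configuration directory before upgrading.
# SPLUNK_HOME and the backup path are assumptions - adjust to your install.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
BACKUP="/tmp/splunk_etc_backup_$(date +%Y%m%d).tar.gz"
if [ -d "$SPLUNK_HOME/etc" ]; then
    tar -czf "$BACKUP" -C "$SPLUNK_HOME" etc
    echo "backed up $SPLUNK_HOME/etc to $BACKUP"
else
    echo "no etc directory under $SPLUNK_HOME - set SPLUNK_HOME to your install path"
fi
```

Indexed data lives separately under $SPLUNK_HOME/var/lib/splunk and is a much larger backup decision to make on its own.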
Hi all, I am running K8s version 1.23.5 with cluster-agent:21.2.0, and I'm getting this output:

[INFO]: 2022-09-27 17:36:53 - podmonitoring.go:92 - No pods to register
[INFO]: 2022-09-27 17:36:53 - containermonitoringmodule.go:455 - Either there are no containers discovered or none of the containers are due for registration
[INFO]: 2022-09-27 17:36:53 - agentregistrationmodule.go:122 - Registering agent again
[INFO]: 2022-09-27 17:36:53 - agentregistrationmodule.go:145 - Successfully registered agent again lan-k8s-desa
[INFO]: 2022-09-27 17:37:33 - nodemetadata.go:47 - Metadata collected for 3 nodes

Any help appreciated. Thanks.
How can I delete my post?
Below is the search I am using. I am joining two indexes and then taking the difference between two time fields, Last_Boot_Time and log_time, but I am unable to get the difference.

index=preos host=*
| stats values(Boot_Time) as Last_Boot_Time values(SN) as SN values(PN) as PN values(VBIS) as NV_VBIS values(NV) as NV values(PCI) as PCI by id host
| fillnull value=clear
| search SN!=clear PN!=clear NV_VBIS!=clear NV!=clear
| fields host id Last_Boot_Time PCI NV_VBIS NV_DRIVER PN SN
| sort Last_Boot_Time
| join [search index=syslog
    | search Error_Code="***"
    | stats count by host _time PC Error_Code pid name Log_Message
    | eval log_time=strftime(_time,"%Y-%m-%d %H:%M:%S")
    | table log_time host PC Error_Code pid name Log_Message count
    | sort by -log_time
    | dedup pid]
| eval diff=log_time-Last_Boot_Time

Following are sample events.

For index=preos (here the boot time is considered the timestamp):
2022-09-22T13:20:38.713211-07:00 preo log-inventory.sh[24193]: Boot timestamp: 2022-09-22 13:09:59

For index=syslog:
2022-09-22T11:51:34.272862-07:00 preo kernel: [74400.062429] NVRM: Xid (PCI:xxx): 119, pid=17993, name=che_mr_ent, Timeout waiting for on 76 (P_RM_CONTROL) (0x20800a4c 0x4).

Thanks in advance
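The subtraction at the end returns nothing because log_time and Last_Boot_Time are strings, not epoch times. A sketch of converting both with strptime() before subtracting (the boot-time format string is assumed from the sample event above):

```
| eval boot_epoch=strptime(Last_Boot_Time, "%Y-%m-%d %H:%M:%S")
| eval log_epoch=strptime(log_time, "%Y-%m-%d %H:%M:%S")
| eval diff_secs=log_epoch - boot_epoch
```

Alternatively, carry the raw _time epoch through the subsearch instead of the strftime'd string, subtract the epochs directly, and only format for display at the end.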
Has anyone built a playbook for "CrowdStrike services stopped"? Basically, querying Splunk for the host name, etc. If so, can you please share how you did it?
I've been given the output of a query that uses the "last 30 days" time range, and I need to know exactly what "last 30 days" means. The data is aggregated, so the output does not have a date field in it, and I do not have access to Splunk directly, so I cannot run test queries.

For example, if the query is run on 8-31-2022, does "last 30 days" give me 8/2 to 8/31, or would it be 8/1 to 8/30?
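For reference, the time-range picker's "Last 30 days" preset typically resolves to earliest=-30d@d and latest=now, i.e. it snaps back to midnight 30 days before the run and extends to the moment of execution. A quick sketch that shows the resolved boundary, if someone with access can run it:

```
| makeresults
| eval search_time=strftime(now(), "%F %T")
| eval last30_earliest=strftime(relative_time(now(), "-30d@d"), "%F %T")
```

Under that assumption, a run on 8-31-2022 would cover 8/1/2022 00:00:00 through the run time on 8/31, with the final day only partially included.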
Hello,

I have duplicated forwarders reported in the Cloud Monitoring Console: the same set of forwarders appears in both active and missing status. Is there a workaround? Could rebuilding the forwarder asset table be an option? The forwarders report to Splunk Cloud through a Heavy Forwarder instance.

Thanks!
Good afternoon! I have a problem setting up alerts. Most alerts, with the exception of one, are processed incorrectly. The alerts run on a schedule over the last 1 minute. By "incorrectly" I mean false positives: they are constantly triggered on schedule, even when the search conditions are not met during that period.

Although the searches in the alerts return the results we need, I am convinced that the problem is in the syntax, since the settings for the correct alert and the problematic alerts are the same.

Examples:

An alert that works fine:

index="main" sourcetype="testsystem-script11"
| transaction maxpause=10m srcMsgId Correlation_srcMsgId messageId
| table _time srcMsgId Correlation_srcMsgId messageId duration eventcount
| fields _time srcMsgId Correlation_srcMsgId messageId duration eventcount
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by subject
| eval timeDiff=_time-prevTime
| delta _time as timeDiff
| where (timeDiff)>1

An example of a problematic alert (I thought the problem was the Cyrillic characters, but I tried without them and it does not help):

index="main" sourcetype="testsystem-script99" resultcode>0
| eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId')
| eval timeValue='eventTime'
| eval time=strptime(timeValue,"%Y-%m-%dT%H:%M:%S.%3N%Z")
| sort -eventTime
| streamstats values(time) current=f window=1 as STERAM_RESULT global=false by srcMsgId_Исх_Сообщения
| eval diff=STERAM_RESULT-time
| stats list(diff) as TIME_DIF list(eventTime) as eventTime list(srcMsgId) as srcMsgId_Бизнес_Сообщения list(routepointID) as routepointID count as Кол_Сообщений by srcMsgId_Исх_Сообщения
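One thing worth checking, as a sketch under the assumption that the stray triggers come from first-event rows: streamstats with current=f leaves the windowed value null on the first event of each by-group, and the subsequent arithmetic can behave unexpectedly; explicitly discarding those rows removes them (srcMsgId_group below is a stand-in for whatever field you actually group by):

```
| streamstats current=f window=1 values(time) AS prev_time BY srcMsgId_group
| eval diff=prev_time - time
| where isnotnull(prev_time) AND diff > 1
```

Also, if the problematic alert's trigger condition is "number of results > 0", note that its closing stats has no where clause, so it returns a row for every group whenever any events match, and the alert fires on every scheduled run.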
I want to filter the search results based on the tx_id that I extract in the second rex, meaning only those results whose transaction_id matches the tx_id. I tried the where clause, but it doesn't work.

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"

I tried this:

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| where JSON.transaction_id in (tx_id)
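where cannot dereference into a raw JSON string with dot notation. One sketch, assuming the JSON and the txn_id line occur in the same event, extracts transaction_id with spath first and then compares plain fields:

```
{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON path=transaction_id output=transaction_id
| where transaction_id=tx_id
```

If the tx_id and the JSON live in different events, you would first collect the IDs across events (for example with eventstats values(tx_id) as tx_ids) and then filter with a multivalue match.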
Say we have events like this:

_time | fw | src_ip | dest_ip | dest_port | fw_rule_action
8/1/22 1:30:00.000 AM | fw1 | 192.168.50.51 | 8.8.8.8 | 53 | block
1/1/22 1:30:00.000 AM | fw1 | 192.168.50.51 | 8.8.8.8 | 53 | permit
12/31/21 1:30:00.000 AM | fw1 | 192.168.50.51 | 8.8.8.8 | 53 | permit

We want to find the events whose fw_rule_action changed. A real-world scenario: you consolidated the rule base, and after applying it you want to see whether some events are now permitted that were blocked in the past, and vice versa. What is the right approach to find the (in this example) block events? Is creating a baseline the right way?
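One approach that avoids maintaining a separate baseline is to group by the connection tuple and keep only tuples that have seen more than one distinct action; the index name is an assumption:

```
index=firewall
| stats dc(fw_rule_action) AS distinct_actions
        values(fw_rule_action) AS actions
        earliest(fw_rule_action) AS previous_action
        latest(fw_rule_action) AS current_action
        BY fw src_ip dest_ip dest_port
| where distinct_actions > 1
```

Comparing previous_action with current_action then shows which direction each change went. A lookup-based baseline is the alternative when you need to compare against a fixed point in time rather than against "ever seen".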
root@ubuntu-linux-22-04-desktop:/opt/splunk/bin# uname -a
Linux ubuntu-linux-22-04-desktop 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:31:33 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

sudo tar xvzf splunk-9.0.1-82c987350fde-Linux-x86_64.tgz -C /opt

root@ubuntu-linux-22-04-desktop:/opt/splunk/bin# sudo ./splunk start --accept-license
./splunk: 1: Syntax error: Unterminated quoted string

Error when starting Splunk on Ubuntu (running under Parallels Desktop on my Apple MacBook Pro):

Model Name: MacBook Pro
Model Identifier: MacBookPro18,1
Chip: Apple M1 Pro
Total Number of Cores: 10 (8 performance and 2 efficiency)
Memory: 16 GB
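"Syntax error: Unterminated quoted string" from /bin/sh is the classic symptom of executing a binary built for a different CPU architecture: the extracted tarball is Linux-x86_64, while uname reports aarch64. A quick check, using the install path from the transcript above:

```shell
# Compare the machine architecture with the architecture of the splunk binary.
uname -m    # prints the VM architecture, e.g. aarch64 on an M1 host
if [ -x /opt/splunk/bin/splunk ]; then
    file /opt/splunk/bin/splunk    # an x86-64 ELF here will not run natively on aarch64
fi
```

Under that assumption, the fix is to download a build that matches the VM architecture (or run an x86_64 VM), since the x86_64 .tgz cannot execute natively on an aarch64 guest.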
Hi all - I am having trouble pulling multivalue fields out into separate events. I'd like to pull each event out onto its own line, but I'm having trouble with the carriage returns and with getting the fields to pair correctly (i.e., error 1232 goes with server1).

Example search:

| makeresults
| eval error="1232 2345 5783 5689 2345 5678 5901", server="server1 server2 server3 server4 server6 server9 server7"
| makemv delim=" " error
| makemv delim=" " server
| eval uniquekey=mvzip(server,error, ":")

How do I separate these fields into their own events so the data looks like:

1232 server1 server1:1232
2345 server2 server2:2345
5783 server3 server3:5783
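The usual pattern (a sketch built on the makeresults example above) is mvzip to pair the values positionally, mvexpand to break the pairs into separate events, and split()/mvindex() to recover the individual fields from each pair:

```
| makeresults
| eval error="1232 2345 5783 5689 2345 5678 5901", server="server1 server2 server3 server4 server6 server9 server7"
| makemv delim=" " error
| makemv delim=" " server
| eval uniquekey=mvzip(server, error, ":")
| mvexpand uniquekey
| eval server=mvindex(split(uniquekey, ":"), 0)
| eval error=mvindex(split(uniquekey, ":"), 1)
| table error server uniquekey
```

Because mvzip pairs by position before the expansion, the error/server pairing stays aligned in every resulting event.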
Good morning,

I'm curious whether anyone has used a similar dataset in Splunk and has suggestions on the best way to build a usable solution. I have a list of IP addresses, and for each IP address there is a list of allowable systems (IPs). If any of the IP addresses communicates with a system outside its allowable list, I want to be alerted. I know I could probably create individual alerts for each of these, but I would like to process them in bulk; for example, Splunk could periodically cross-reference the IP list against the network data to see if there are any violations. Could a lookup table be used for this?
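A lookup table fits this well. One sketch assumes a CSV lookup named allowed_pairs.csv with columns src_ip, dest_ip, and allowed (one row per permitted pair), and an index/field naming you would adapt:

```
index=network_traffic
| lookup allowed_pairs.csv src_ip dest_ip OUTPUT allowed
| where isnull(allowed)
| stats count latest(_time) AS last_seen BY src_ip dest_ip
```

Any event whose pair has no matching lookup row comes back with allowed unset, so isnull(allowed) flags the violations; saved as a scheduled alert, this checks the whole list in one search. If only the sources that appear in your list should be evaluated, scope the events first with an inputlookup-driven subsearch on src_ip.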
I have followed the instructions in https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/#To-create-modular-inputs-programmatically and I am also following the YouTube tutorial https://www.youtube.com/watch?v=-M3MWJGdNJE&t=1029s&ab_channel=Splunk%26MachineLearning

I have installed Splunk Enterprise on my computer (Windows 10). I created a folder with the required structure. In it is inputs.conf.spec with the following:

[tmdb_input:://<name>]
api_key = <value>
lang = <value>
page_number = <value>
region = <value>

I have a tmdb_input.py file in the bin directory. When I start up Splunk, the app "TMDB Input" is there, but when I go to Settings -> Data inputs, the input for my app is not present. I have looked at the log files in "C:\Program Files\Splunk\var\log\splunk" but I cannot find anything that helps me locate the problem; I have searched for ERROR and any reference to my tmdb_input app in the logs, but nothing helps.

Does anyone have any insight into how I might troubleshoot why "TMDB Input" does not appear under Settings -> Data inputs?
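One thing to check against the dev docs: a modular input spec stanza uses the form [scheme://<name>], while the spec quoted above has tmdb_input:// preceded by an extra colon. A sketch of the documented shape, with the scheme and parameter names taken from the post:

```
# default/inputs.conf.spec
[tmdb_input://<name>]
api_key = <value>
lang = <value>
page_number = <value>
region = <value>
```

After correcting the spec and restarting Splunk, `splunk btool inputs list --debug` from the bin directory, plus the ModularInputs and ExecProcessor entries in splunkd.log, are the usual places a registration failure shows up.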
I have 2 fields: the values of fieldA are present in fieldB, and I need to remove the first part of fieldB up to the value of fieldA. For example:

fieldA = BP498
fieldB = "A1John/Doe SmithBP498 XX XX XX"

Desired: fieldB="BP498 XX XX XX"

Thanks in advance.
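A sketch using split() on the literal value of fieldA, which keeps everything from the first occurrence of fieldA onward (it assumes fieldA occurs exactly once in fieldB):

```
| makeresults
| eval fieldA="BP498", fieldB="A1John/Doe SmithBP498 XX XX XX"
| eval fieldB=fieldA . mvindex(split(fieldB, fieldA), 1)
| table fieldA fieldB
```

split(fieldB, fieldA) breaks fieldB on the value of fieldA, mvindex(..., 1) takes the part after it, and concatenating fieldA back on the front yields "BP498 XX XX XX".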