All Topics


Hello, I am working on a Splunk query and I need help adjusting my rex command to split two values that are stored in one field into their own fields. Example below:

index=test sourcetype=test category=test
| rex field=user "(?<region>[^\/]+)\/(?<username>[^\w].+)"
| fillnull t
| sort _time
| table _time, username, user, region, sourcetype, result, t
| bin span=1d _time
| dedup t

The user field contains test\test1 and I need to split that so that username=test and region=test1.
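A minimal sketch of one way to do the split, assuming the separator in the user field is a literal backslash (the index/sourcetype values are copied from the question). Note that with rex the backslash usually needs to be double-escaped inside the quoted regex, which is why split() can be simpler here:

index=test sourcetype=test category=test
| eval username=mvindex(split(user, "\\"), 0), region=mvindex(split(user, "\\"), 1)
| table _time username region user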
I want to show statistics of daily volume and the latest event for all sourcetypes in a single table. Can you please help?
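A minimal sketch of one possible approach using tstats, which reads only index metadata and is fast; index=* and the 1-day span are assumptions to adjust. If "volume" means bytes rather than event counts, the license_usage.log data in the _internal index would be needed instead.

| tstats count AS daily_events max(_time) AS latest_event WHERE index=* BY _time span=1d sourcetype
| eval latest_event=strftime(latest_event, "%Y-%m-%d %H:%M:%S")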
Does anyone know if the current Dynatrace add-on will be updated to use the Dynatrace V2 API? We have a requirement to ingest some web app metrics from Dynatrace that are not easily available via the V1 API, and we would also like to know whether the add-on will remain functional if/when the V1 API is retired.
Hi Community,

I have two separate Splunk installs: one is version 8.1.0 and the other is 8.2.5. The older version is our production install. I can see a lag in a dashboard we set up that calculates the difference between the index time and the event time. Since it is the production environment, I assumed the lag might be due to the following reasons:

1. The universal forwarder is busy because it does a recursive search through all the files within the monitored folders. This is done for almost 44 such folders. Example: [monitor:///net/mx41779vm/data/apps/Kernel_2.../*.log]
2. The forwarder might be too outdated to handle such loads. The version used is 6.3.3.
3. The Splunk install is busy because there is already a lot of incoming data from other forwarders.

To narrow down the issue, I replicated the setup in a test environment that does not have the heavy load of production but has the same settings with reduced memory. Even with a completely new forwarder, I still see the same lag, which is very confusing. Could someone provide tips or guidance on how to work through this issue? Thanks in advance.

Regards,
Pravin
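A minimal SPL sketch for quantifying the lag per host and source, assuming the data is searchable (the index name below is a placeholder); _indextime is the time Splunk actually indexed each event:

index=your_index
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag by host, source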
I want to create an alert that triggers when at least 500 events match with the same source IP address, the same destination address, and different destination ports within 1 minute. The search I've come up with so far is as follows, although I'm not sure it's what I really need:

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) AND (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed blocked)
| stats first(_time) as date dc(dest_port) as num_dest_port by src_ip, dest_ip
| where num_dest_port >500
| convert ctime(date) as fecha

What I think I am still missing is the "with the same source IP and the same destination IP within one minute" part. Could someone help me with this? Thanks in advance and best regards.
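A minimal sketch of one way to add the one-minute window, by bucketing _time before the stats (thresholds and field names are taken from the question and may need adjusting):

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed, blocked)
| bin _time span=1m
| stats count AS events dc(dest_port) AS num_dest_port by _time, src_ip, dest_ip
| where events >= 500 AND num_dest_port > 1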
Hello, I am developing a command to replay an alert. I'm not sure it's a good idea to use the same method as the previous version of the program. For the replay, after determining which alerts should fire, I dispatch the search with:

kwargs_block = {'dispatch.earliest_time': earliest, "dispatch.latest_time": latest, "trigger_actions": self.trigger}
job = search.dispatch(**kwargs_block)

Here is an example of a replay started at 11:52, but the scheduled task runs at minute 30 of each hour, so I would like the result to show 11:30. Do you have any idea how to set the index time of the replayed alert?
The objective is to display the multiple modifications done by a submitter, showing the number of modifications along with the respective file names and hashes. Example: submitter John made 15 modifications, 3 modifications to app.exe, 2 modifications to gap.exe, and 10 modifications to rap.exe, so the display should show 15 hashes. My SPL does the job; it ends with:

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, list(md5_count) AS unique_md5, list(meaningful_name) AS file_name, list(md5_value) as md5 by submitter_id

I do see the results, but I am unable to easily eyeball where the hashes of one file name end and the next one begins, especially when there are lots of hashes. Please check the attached screenshot of the output I am getting. I want to easily see/distinguish where one set of hashes finishes for a file and the next one starts, and I am looking for suggestions on how to make that visually separate. Thank you.
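One possible sketch, not the only option: adding the file name to the BY clause gives each file its own row per submitter, so its hashes are visually separated (field names are taken from the question):

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, dc(md5_value) AS unique_md5, list(md5_value) AS md5 by submitter_id, meaningful_name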
Hello Splunkers, I have a question regarding the number of indexers or indexer clusters that can reside in a single site of a cluster. Suppose I have 400 indexers: is there a limit on the number of indexers in a single site? And another question: how many indexers can I place in an indexer cluster? Can it be more than 3?
Sumologic query:

_source="VerizonCDN"
| json field=_raw "path"
| json field=_raw "client_ip"
| json field=_raw "referer"
| where %referer = ""
| where %status_code = 200
| json field=_raw "user_agent"
| count by %host,%path,%client_ip,%referer,%user_agent
| where _count >= 100
| order by _count desc

and my conversion to Splunk:

source="http:Emerson_P1CDN" AND status_code=200 AND referer=""
| stats count by host,path,client_ip,referer,user_agent
| where count >= 100
| sort - count

Do you think I converted it correctly? The Splunk results are different from the Sumologic results.
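A hedged sketch of a closer equivalent: Splunk may not auto-extract the JSON fields at search time, so spath can pull them out of _raw before filtering, and the empty-referer test is done in a where clause (the source value is copied from the question; status_code is compared as a string because spath extracts strings):

source="http:Emerson_P1CDN"
| spath
| where status_code="200" AND (isnull(referer) OR referer="")
| eval referer=coalesce(referer, "")
| stats count BY host, path, client_ip, referer, user_agent
| where count >= 100
| sort - count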
Is there a way to configure an external repository as the default one? I noticed that when I create a new playbook or modify an existing playbook from another remote repository, it always gets saved into the local repository. How do I change that behaviour to make another repository the default? I am on SOAR on-prem 5.1.0.
I need to get a count of events by day, by hour or half-hour, using a field in the Splunk log that is a string containing a date, e.g. eventPublishTime: 2022-05-05T02:20:40.994Z. I tried some variations of the query below, but it doesn't work. How should I formulate my query?

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%SZ")
| convert timeformat="%H:%M" ctime(publishTime) AS PublishHrMin
| convert timeformat="%Y-%m-%d" ctime(_time) AS ReceiptDate
| stats c(ReceiptDate) AS ReceiptDateCount by ReceiptDate, parentEventName, PublishHrMin

Thank you
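A minimal sketch, assuming the main problem is the missing subseconds in the strptime format (the sample value 2022-05-05T02:20:40.994Z has milliseconds, so %S alone will not match); the 30-minute span is just an example:

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval _time=publishTime
| bin _time span=30m
| stats count AS eventCount by _time, parentEventName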
Hi, I have a custom Python script in Splunk that translates Chinese characters to English. The custom search command was built following the guide below: https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/ However, when we perform a search, the number of Events does not tally with the Statistics tab. For example, there is a total of 8 events but only 1 row in statistics. Sometimes it tallies, but most of the time it doesn't. I would like to know if this is a limitation within Splunk when using custom search commands or if there is some configuration missing. I appreciate the help.
The below setup doesn't appear to index the script's output and I can't figure out why. Even the basic one-liner example in their documentation (https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorWindowsdatawithPowerShellscripts) doesn't produce indexed events for me. I've tried several variations on how the data is being formatted. I know the script executes because the file change it makes is occurring.

configureBINDIP.ps1

$launchConfFile = "C:\Program Files\SplunkUniversalForwarder\etc\splunk-launch.conf"
$launchConfSetting = "SPLUNK_BINDIP=127.0.0.1"

function CraftEvent ($message) {
    $event = [PSCustomObject]@{
        "SplunkIndex" = "windows"
        "SplunkSource" = "powershell"
        "SplunkSourceType" = "Powershell:ConfigureBINDIP"
        "SplunkHost" = "mysplunkhost"
        "SplunkTime" = (New-TimeSpan -Start $(Get-Date -Date "01/01/1970") -End $(Get-Date)).TotalSeconds
        "Message" = $message
    }
    Return $event
}

if (-not (Test-Path $launchConfFile) ) {
    $event = [PSCustomObject]@{
        "Message" = "Could not locate splunk-launch.conf: $launchConfFile"
    }
    Write-Output $event | Select-Object
    exit
}

if ( (Get-Content $launchConfFile ) -notcontains $launchConfSetting ) {
    $message = "Appending '$launchConfSetting' to '$launchConfFile'"
    "`r`n$launchConfSetting" | Out-File $launchConfFile -Append utf8
    if ( (Get-Content $launchConfFile ) -contains $launchConfSetting ) {
        $message += ".... splunk-launch.conf update successful. Please remove this host from the app to restart."
    } else {
        $message += ".... splunk-launch.conf does not appear updated. Please continue to monitor."
    }
} else {
    $message = "splunk-launch.conf already appears updated. Please remove this host from the app to restart."
}

$event = [PSCustomObject]@{
    "Message" = $message
}
Write-Output $event | Select-Object

inputs.conf

[powershell://ConfigureBINDIP]
script = . "$SplunkHome\etc\apps\configure_bindip\bin\configureBINDIP.ps1"
index = windows
source = powershell
sourcetype = Powershell:ConfigureBINDIP

web.conf

[settings]
mgmtHostPort = 127.0.0.1:8089
Our IIS logs contain a "time_taken" field which indicates the number of milliseconds each event took. I'd like to use the data from this field, along with the actual event _time (what I'm thinking of as the time the server responded, or the "responseTime"), to create a chart showing how many events were "in progress" over time. It's easy enough to calculate the "requestTime" by doing something like this:

eval requestTime = _time - time_taken

What I'm missing is how to generate a graph with time (to the second) along the X-axis and the total number of events in progress at that time on the Y-axis. For example, if a request was logged at 12:00:06 pm and had a time_taken of 3,000 ms (so the "requestTime" was 12:00:03), then I would want it counted in 4 columns: 12:00:03, 12:00:04, 12:00:05, and 12:00:06, indicating that this request was "in progress" during each of those seconds. Essentially, I want something like the output of the command below, but as a count of all events in progress during each second rather than just a discrete count of events based on their _time:

timechart count span=1s
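A minimal sketch of one possible approach: expand each request into one row per second it was in progress using mvrange/mvexpand, then count per second (the index/sourcetype names are placeholders; very long-running requests can hit mvexpand memory limits):

index=iis sourcetype=iis
| eval requestTime=floor(_time - time_taken/1000)
| eval second=mvrange(requestTime, floor(_time) + 1)
| mvexpand second
| eval _time=second
| timechart span=1s count AS in_progress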
We are on the Splunk Free license, which has a daily indexing limit of 500 MB. This has never been a problem before because we've had a consistently stable log volume of roughly 2 MB/day. The total size of ALL of our logs, 150 MB, is far less than the daily limit. Yet somehow Splunk has complained and shut down our license. Does anyone have familiarity with this kind of error? Why would it trigger on such a small log database and low flow rate?
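A sketch for checking what is actually being counted against the license, using the internal license usage log (field b is bytes and idx is the index name, assuming _internal is retained on this install):

index=_internal source=*license_usage.log* type="Usage"
| eval MB=b/1024/1024
| timechart span=1d sum(MB) AS MB_indexed BY idx

Note that the license meter counts the uncompressed raw data indexed per day rather than the size on disk, so re-indexed or duplicated inputs can exceed the quota even when the stored logs look small.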
Today I noticed that one of the heavy forwarders in our distributed environment was not calling back to the deployment server to fetch config. Checking the logs on the HF, I noticed:

DC:DeploymentClient [3909 MainThread] - target-broker clause is missing.
DC:DeploymentClient [3909 MainThread] - DeploymentClient explicitly disabled through config.
DS_DC_Common [3909 MainThread] - Deployment Client not initialized.
DS_DC_Common [3909 MainThread] - Loading and initializing Deployment Server...
DeploymentServer [3909 MainThread] - Attempting to reload entire DS; reason='init'
DSManager [3909 MainThread] - No serverclasses configured.
DSManager [3909 MainThread] - Loaded count=0 configured SCs

Running "splunk display deploy-client" tells me that the "Deployment Client is disabled." I am pretty sure this is why the HF is not phoning home or fetching new config, though I cannot figure out why. The deploymentclient.conf file is identical for all our HFs, stored in /etc/apps/xxx/default/deploymentclient.conf. A grep search for "target-broker" revealed no duplicate/hidden/conflicting files generated locally. Traffic is allowed, as I am able to telnet to DS:8089. I have tried restarting Splunk on the HF with no success; the same "DC:DeploymentClient" errors appear. Why is this only affecting this one HF and not the others? How can I resolve this issue?

Best regards // G
Hello, is it possible to forward the same data to different Splunk platforms / indexer clusters without doubling license usage? Thanks
I am trying to accomplish a few actions:

1. Move the standalone server from one location to a different location.
2. Upgrade from the old server with an unsupported Splunk version to a new server with the latest version, all running on Windows.
3. Make sure the systems reporting to the old console report into the new one.
4. Eventually decommission the old instance.

It was recommended that I make the new instance the search head and connect to the old instance as a search peer, but then I started to see the following errors on the new instance:

"Problem replicating config (bundle) to search peer 'X.X.X.X:8089', Upload bundle="C:\Program Files\Splunk\var\run\hostname-1654707664.bundle" to peer name=hostname uri=https://X.X.X.X:8089 failed; http_status=409 http_description="Conflict".

Then I tried to point the Universal Forwarders to the new instance using the CLI commands (since there was no outputs.conf on the old instance) and got the following errors:

RED - the health status for the splunkd service
RED - TCPOUTAutoLB-Q
RED - TailReader-0

I really appreciate all the help. Thank you.
We are extracting fields with spaces in them using the transforms below. Is there a way to remove the spaces within the field names on the back end? There are hundreds of fields with spaces, and I tried field aliases but it's hard to apply one for each and every field.

Transforms:

[sourcetype]
SOURCE_KEY=_raw
REGEX=(?<field>[a-zA-Z ]+):(?<value>.+)
FORMAT=$1:$2

_raw:

"Process Create: Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"

Output I am getting: "Process Id" = 6228. Is there a way to change this to ProcessId=6228 or Process-Id=6228? From the UI I tried the following; can someone help me with a back-end trick?

| makeresults
| eval _raw="Process Create:true Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"
| rex field=_raw max_match=0 "(?<field>[a-zA-Z ]+):(?<value>.+)"
| rex mode=sed field=field "s/ /_/g"
| eval tmp=mvzip(field,value,"=")
| rename tmp as _raw
| kv
| table *
Hi All, I have logs from db_inputs/custom scripts where the owner is not indexing the custom time field as _time, and they import all of the data every day rather than incrementally. I need to find assets seen in the last 7 days based on the custom time field. The custom time field is last_found, for example:

2020-07-06T17:42:29.322Z
2020-01-06T17:42:29.322Z
2020-01-05T17:42:29.322Z
2020-01-04T17:42:29.322Z

From these dates and times, how can I search for only the assets whose last_found value falls within the last 7 days? Please help with the query; that would be a great help. Thanks!
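A minimal sketch, assuming last_found always looks like 2020-07-06T17:42:29.322Z (the index, sourcetype, and asset_id field names are placeholders for whatever identifies an asset in this data):

index=your_index sourcetype=your_sourcetype
| eval last_found_epoch=strptime(last_found, "%Y-%m-%dT%H:%M:%S.%3NZ")
| where last_found_epoch >= relative_time(now(), "-7d@d")
| dedup asset_id
| table asset_id, last_found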