All Topics

The setup below doesn't appear to index the script's output and I can't figure out why. Even the basic one-liner example in the documentation (https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorWindowsdatawithPowerShellscripts) doesn't produce indexed events for me. I've tried several variations on how the data is formatted. I know the script executes, because the file change it makes is occurring.

configureBINDIP.ps1:

$launchConfFile = "C:\Program Files\SplunkUniversalForwarder\etc\splunk-launch.conf"
$launchConfSetting = "SPLUNK_BINDIP=127.0.0.1"

function CraftEvent ($message) {
    $event = [PSCustomObject]@{
        "SplunkIndex"      = "windows"
        "SplunkSource"     = "powershell"
        "SplunkSourceType" = "Powershell:ConfigureBINDIP"
        "SplunkHost"       = "mysplunkhost"
        "SplunkTime"       = (New-TimeSpan -Start $(Get-Date -Date "01/01/1970") -End $(Get-Date)).TotalSeconds
        "Message"          = $message
    }
    Return $event
}

if (-not (Test-Path $launchConfFile)) {
    $event = [PSCustomObject]@{
        "Message" = "Could not locate splunk-launch.conf: $launchConfFile"
    }
    Write-Output $event | Select-Object
    exit
}

if ((Get-Content $launchConfFile) -notcontains $launchConfSetting) {
    $message = "Appending '$launchConfSetting' to '$launchConfFile'"
    # Note: the post originally had "-Append utf8", which fails to bind; "-Encoding utf8" is presumably what was meant
    "`r`n$launchConfSetting" | Out-File $launchConfFile -Append -Encoding utf8
    if ((Get-Content $launchConfFile) -contains $launchConfSetting) {
        $message += ".... splunk-launch.conf update successful. Please remove this host from the app to restart."
    }
    else {
        $message += ".... splunk-launch.conf does not appear updated. Please continue to monitor."
    }
}
else {
    $message = "splunk-launch.conf already appears updated. Please remove this host from the app to restart."
}

# CraftEvent (above) is defined but never called; the final event carries only Message
$event = [PSCustomObject]@{
    "Message" = $message
}
Write-Output $event | Select-Object

inputs.conf:

[powershell://ConfigureBINDIP]
script = . "$SplunkHome\etc\apps\configure_bindip\bin\configureBINDIP.ps1"
index = windows
source = powershell
sourcetype = Powershell:ConfigureBINDIP

web.conf:

[settings]
mgmtHostPort = 127.0.0.1:8089
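For anyone reproducing this, two sanity checks narrow the problem down: search the target index over All Time (in case events landed with an unexpected timestamp), and look for errors from the scripted-input machinery in the forwarder's internal logs. A sketch; the ExecProcessor component filter is an assumption about where the PowerShell input logs its errors:

index=windows sourcetype="Powershell:ConfigureBINDIP"

index=_internal sourcetype=splunkd log_level=ERROR (ExecProcessor OR powershell)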
Our IIS logs contain a "time_taken" field which indicates the number of milliseconds each event took. I'd like to use the data from this field, along with the actual event _time (what I'm thinking of as the time the server responded, or the "responseTime"), to create a chart showing how many events were "in progress" over time. It's easy enough to calculate the "requestTime" by doing something like this:

eval requestTime = _time - time_taken

What I'm missing is how to generate a graph with time (to the second) along the X-axis and the total number of events in progress at that time on the Y-axis. For example, if a request was logged at 12:00:06 pm and had a time_taken of 3,000 ms (thus the "requestTime" was 12:00:03), then I would want it counted in 4 columns: 12:00:03, 12:00:04, 12:00:05, and 12:00:06, indicating that this request was "in progress" during each of those seconds. Essentially, I want something like the output of the command below, but as a count of all events in progress during each second rather than a discrete count of events based on their "_time":

timechart count span=1s
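One way to get exactly the per-second bucketing described is to expand each event into one row per second it was open, then timechart those rows. A sketch, assuming the index name and that time_taken is in milliseconds (so it needs dividing by 1000, which the eval above omits):

index=iis
| eval requestTime = _time - (time_taken / 1000)
| eval second = mvrange(floor(requestTime), floor(_time) + 1, 1)
| mvexpand second
| eval _time = second
| timechart span=1s count AS in_progress

Note that mvexpand multiplies the row count, so this is best run over a narrow time window; for long ranges the concurrency command is a cheaper, if less literal, alternative.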
We are on the Splunk Free license, which has a daily indexing limit of 500 MB. This has never been a problem before, because we've had a consistently stable log volume of roughly 2 MB/day. The total size of ALL of our logs, 150 MB, is far less than the daily limit. Yet somehow Splunk has complained and shut down our license. Does anyone have familiarity with this kind of error? Why would it trigger on such a small log database and low flow rate?
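One way to see what actually counted against the quota, day by day, is the license usage log (this assumes the _internal index is available to you):

index=_internal source=*license_usage.log* type=Usage
| eval MB = b / 1024 / 1024
| timechart span=1d sum(MB) AS indexed_MB

Licensing meters daily ingest rather than data at rest, so a one-off re-index or a suddenly chatty source can blow through 500 MB even when the stored data is small.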
Today I noticed that one of the heavy forwarders in our distributed environment was not calling back to the deployment server to fetch config. Checking the logs on the HF I noticed:

DC:DeploymentClient [3909 MainThread] - target-broker clause is missing.
DC:DeploymentClient [3909 MainThread] - DeploymentClient explicitly disabled through config.
DS_DC_Common [3909 MainThread] - Deployment Client not initialized.
DS_DC_Common [3909 MainThread] - Loading and initializing Deployment Server...
DeploymentServer [3909 MainThread] - Attempting to reload entire DS; reason='init'
DSManager [3909 MainThread] - No serverclasses configured.
DSManager [3909 MainThread] - Loaded count=0 configured SCs

I tried "splunk display deploy-client", which tells me that the "Deployment Client is disabled." I am pretty sure this is why the HF is not phoning home or fetching new config, though I cannot figure out why. The deploymentclient.conf file is identical for all our HFs, stored in /etc/apps/xxx/default/deploymentclient.conf. A grep search for "target-broker" revealed no duplicate/hidden/conflicting locally generated files. Traffic is allowed, as I am able to telnet to DS:8089. I have tried restarting Splunk on the HF with no success; same "DC:DeploymentClient" problems. Why is this only affecting the one HF and not the others? How can I resolve this issue?

Best regards // G
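Since the error names a missing target-broker clause, it may help to compare what the HF actually resolves at runtime against the file on disk. btool shows the merged view and which file each line came from; the stanza layout below is the standard shape (the DS hostname is a placeholder):

splunk btool deploymentclient list --debug

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

If btool shows disabled = true or no target-broker stanza at all, something on that one host (an app with higher precedence, or a stray system/local file) is overriding the shared app.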
Hello, Is it possible to forward the same data to different Splunk platforms / indexer clusters without doubling license usage? Thanks
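For reference, the forwarder-side cloning itself is just outputs.conf with two target groups (hostnames below are placeholders); whether the second copy counts against a license depends on what is licensed where, which is really the question being asked:

[tcpout]
defaultGroup = clusterA, clusterB

[tcpout:clusterA]
server = idxa1.example.com:9997, idxa2.example.com:9997

[tcpout:clusterB]
server = idxb1.example.com:9997, idxb2.example.com:9997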
I am trying to accomplish a few actions:

1. Move the standalone server from one location to a different location.
2. Upgrade from the old server with an unsupported Splunk version to a new server with the latest version, all running on Windows.
3. Make sure the systems reporting to the old console report into the new one.
4. Eventually decommission the old instance.

It was recommended to me to make the new instance the search head and connect to the old instance as a search peer, but then I started to see the following errors in the new instance:

"Problem replicating config (bundle) to search peer 'X.X.X.X:8089', Upload bundle="C:\Program Files\Splunk\var\run\hostname-1654707664.bundle" to peer name=hostname uri=https://X.X.X.X:8089 failed; http_status=409 http_description="Conflict"."

Then I tried to point the Universal Forwarders to the new instance using CLI commands, since there was no outputs.conf on the old instance, and got the following errors:

RED - the health status for the splunkd service
RED - TCPOUTAutoLB-Q
RED - TailReader-0

I really appreciate all the help. Thank you,
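For what it's worth, the usual CLI for repointing a forwarder is just the pair below (the hostname is a placeholder); it writes an outputs.conf under the forwarder's own etc\system\local, so checking that file afterwards confirms what actually got configured:

splunk add forward-server newsplunk.example.com:9997
splunk restart

A RED TCPOUTAutoLB-Q status generally means the forwarder's output queue is blocked, i.e. it cannot complete a connection to the configured indexer, so verifying that the new instance is actually listening on port 9997 is a sensible first step.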
We are extracting fields with spaces in them using the transforms below. Is there a way we can remove the spaces inside field names from the backend? There are hundreds of fields with spaces. I tried field aliases, and it's hard to apply one for each and every field.

Transforms:

[sourcetype]
SOURCE_KEY = _raw
REGEX = (?<field>[a-zA-Z ]+):(?<value>.+)
FORMAT = $1:$2

_raw:

"Process Create: Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"

Output I am getting:

"Process Id" = 6228

Is there a way we can change this to ProcessId=6228 or Process-Id=6228? From the UI I tried the following; can someone help me with the backend trick?

| makeresults
| eval _raw="Process Create:true Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"
| rex field=_raw max_match=0 "(?<field>[a-zA-Z ]+):(?<value>.+)"
| rex mode=sed field=field "s/ /_/g"
| eval tmp=mvzip(field,value,"=")
| rename tmp as _raw
| kv
| table *
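One backend setting worth checking: transforms.conf has a CLEAN_KEYS option (on by default for search-time transforms, if I recall correctly) that replaces characters not valid in key names with underscores, which would turn "Process Id" into Process_Id. A sketch of the stanza with it made explicit; the double-colon FORMAT is the documented form for dynamic key names, and the added \s* is my own tweak to trim the leading space from values:

[sourcetype]
SOURCE_KEY = _raw
REGEX = (?<field>[a-zA-Z ]+):\s*(?<value>.+)
FORMAT = $1::$2
CLEAN_KEYS = true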
Hi All, I have logs from db_inputs/a custom script where the owner is not indexing the custom time field as _time, and they re-import all the data every day rather than incrementally. So I need to find assets from the last 7 days using the custom time field. The custom time field is last_found, e.g.:

2020-07-06T17:42:29.322Z
2020-01-06T17:42:29.322Z
2020-01-05T17:42:29.322Z
2020-01-04T17:42:29.322Z

From these date-time values, how can I search for only the assets whose last_found is within the last 7 days? Please help with the query; that would be a great help.

Thanks!
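A sketch of one way to do this, parsing last_found into epoch time and filtering against a relative cutoff. The index/sourcetype are placeholders, asset_id is a stand-in for whatever uniquely identifies an asset, and the trailing Z is treated as a literal here, so watch for timezone skew if the indexer is not on UTC:

index=assets sourcetype=db_input
| eval last_found_epoch = strptime(last_found, "%Y-%m-%dT%H:%M:%S.%3QZ")
| where last_found_epoch >= relative_time(now(), "-7d@d")
| dedup asset_id
| table asset_id last_found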
For context, I'm creating a dashboard where a user can search activity of all hosts in an environment or one host in that same environment. Unfortunately, the naming convention used for hostnames makes searching all hosts in a specific environment a bit more complicated than using a single field/value pair with a wildcard. For example, searching all non-production hosts would require a search similar to the following in my case:

index=servers host!="*prd*" AND (host="*30*" OR host="*40*")

In the dashboard, I'd like the user to be able to select a single hostname from a dropdown, or an "All Servers" option from the dropdown. With that being said, is there a way I can map all the hostnames to a single "field value" such that something like...

index=servers host=allhosts

...would accomplish the same thing as my initial search example? This would be helpful as it would allow me to use a token for the host field when a user selects an option from the hosts dropdown.
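An eventtype might be the closest thing to "mapping many hostnames to one field value", since eventtype is itself a searchable field. A sketch, reusing the filter above (the eventtype name is arbitrary):

eventtypes.conf:

[allhosts_nonprod]
search = host!="*prd*" (host="*30*" OR host="*40*")

Then index=servers eventtype=allhosts_nonprod behaves like the long form, and the dropdown's "All Servers" choice can emit eventtype=allhosts_nonprod as the token value while individual hosts emit host="<name>".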
Working on migrating from a RHEL 6 VM running Splunk 8.0.5 to a RHEL 8 VM with the latest Splunk, 8.2.6 (no clustering). I read and followed the installation and migration docs, and I've been able to verify with some old data that it's working. But another thing I'd like to do is optimize the indexes, put them on new VM disks, and distribute them between hot/warm and cold/frozen; the problem is our indexes are pretty big.

My understanding is that in order to move/migrate the indexes, I'll need to stop Splunk on the old host and copy/rsync the directories over, then modify indexes on the new host and start Splunk on it (of course with the required DNS repointing and forwarder reconfig to point to the new host). But the volumes I have are about 3 TB, 3 TB, and one large one of 25 TB. I tested rsync with some data directories and it looks like it would take several days. For the large volume I don't think I have any choice but to remove the disks from the old VM and attach them to the new one. But even for the two smaller volumes it looks like it will take almost 24 hours to copy over 5-6 TB, and I don't think I can keep Splunk stopped for that long; it would definitely lose data.

Am I understanding this correctly, or is there a better and/or quicker way to do this? The reason I wanted to use new VM disks is that the old host has several VM disks combined to make each of the OS volumes and it's just messy (e.g. the 25 TB mount point has 3 underlying disks), plus with new disks I can also distribute the indexes and hot/cold buckets better between fast and not-so-fast storage. Would really appreciate it if anyone can provide suggestions/advice.
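One common way to shrink the downtime window is a two-pass rsync: do the bulk copy while the old Splunk is still running, then stop Splunk and run a second pass that only transfers the delta. A sketch, with paths and hostname as assumptions:

# pass 1: bulk copy while old Splunk is still running (can take days; that's fine)
rsync -a /opt/splunk/var/lib/splunk/ newhost:/opt/splunk/var/lib/splunk/

# stop Splunk cleanly on the old host, then pass 2 copies only what changed since pass 1
rsync -a --delete /opt/splunk/var/lib/splunk/ newhost:/opt/splunk/var/lib/splunk/

Warm/cold buckets are stable on disk, so pass 1 captures almost everything; hot buckets change underneath rsync, which is exactly what the short second pass after a clean stop fixes.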
<single>
  <search>
    <query>|query</query>
    <earliest>$time_token2.earliest$</earliest>
    <latest>$time_token2.latest$</latest>
  </search>
  <option name="drilldown">all</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <set token="hide_details">$click.name$</set>
    <unset token="form.hide_details"></unset>
  </drilldown>
</single>

<input type="checkbox" token="hide_details">
  <label></label>
  <choice value="hide">Hide Details</choice>
  <change>
    <condition value="hide">
      <unset token="hide_details"></unset>
    </condition>
  </change>
  <delimiter> </delimiter>
</input>
I am trying to ingest CyberArk EPM logs into Splunk Cloud and found the doc related to it: https://docs.splunk.com/Documentation/AddOns/released/CyberArkEPM/About. But it does not mention the configuration on the CyberArk side. Can someone help me with the configuration steps on the CyberArk side? The Splunk Add-on for CyberArk EPM asks for a username and password, and I don't know where to get those details. Thanks in advance!!
As I understand it, es_notable_events is a KV store collection that stores notable event information for the last 48 hours; there is also a panel in the ES Audit dashboard that shows Notable Events By Owner - Last 48 Hours.

Is there any way to build a similar chart for older notables?

Thanks, Deovrat
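Since notables are also persisted as events in the notable index, a longer-window version of that panel can probably be built directly from there. A sketch, assuming ES's `notable` macro (which enriches owner and status) is available to your role:

`notable`
| timechart span=1d count BY owner usenull=f

Run over, say, the last 90 days, this should approximate the 48-hour panel without the KV store's retention limit.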
hello team, please, I need a solution to this question. I have three column fields: startDate, endDate, and ARTstartDate. I want to count the number of ARTstartDate values that fall in between the two dates (i.e. from startDate to endDate).
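A sketch of one approach: parse the three fields to epoch time and count the rows where ARTstartDate lands inside the window. The "%Y-%m-%d" format string is an assumption; swap in whatever format the fields actually use:

| eval sd = strptime(startDate, "%Y-%m-%d"),
       ed = strptime(endDate, "%Y-%m-%d"),
       ad = strptime(ARTstartDate, "%Y-%m-%d")
| eval in_window = if(ad >= sd AND ad <= ed, 1, 0)
| stats sum(in_window) AS ART_starts_in_window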
Hey everyone, and I hope you're having a great day! I have configured a custom field extraction in the Splunk Search app for my sourcetype, but I don't have the ability to share it with other users like I can on another Splunk instance where I have the Power role (with the Power role, I can share it no problem). I don't want to assign myself the Power role, since it's broad and wouldn't follow the principle of least privilege. So which permission would I need to assign myself in order to be able to share my field extraction with other users?
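Under the hood, "sharing" a knowledge object just means its metadata entry gains app- or system-level export plus read/write access lists, so the role needs write permission on the app's objects. For illustration only (stanza name and roles are hypothetical), a shared extraction ends up looking roughly like this in the app's metadata/local.meta:

[props/my_sourcetype/EXTRACT-myfield]
export = system
access = read : [ * ], write : [ admin, power ]

Comparing your role's capabilities and app permissions against Power on the instance where sharing works may be the quickest way to isolate the exact missing permission.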
hi, is it possible to transform this table:

column a   column b   column c   column d   column e   column f   column g
aaa        bbb        ccc        ddd        eee        fff        ggg

to:

column a   column b   column c   column d   name       value
aaa        bbb        ccc        ddd        column e   eee
aaa        bbb        ccc        ddd        column f   fff
aaa        bbb        ccc        ddd        column g   ggg

Simone
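One way to sketch this in SPL: pack the columns to unpivot into name=value strings, mvexpand them into separate rows, and split them back apart. Field names with spaces need single quotes when read in eval:

| eval pair = mvappend("column e=" . 'column e',
                       "column f=" . 'column f',
                       "column g=" . 'column g')
| mvexpand pair
| eval name = mvindex(split(pair, "="), 0),
       value = mvindex(split(pair, "="), 1)
| table "column a" "column b" "column c" "column d" name value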
Splunk Connect for Zoom (Version 1.0.1, April 23, 2020, https://splunkbase.splunk.com/app/4961/): disable the use of the TLSv1.0 protocol in favor of a cryptographically stronger protocol such as TLSv1.2.

This is a finding from Qualys QID-38628 on tcp port 4443. The following openssl command can be used to do a manual test:

openssl s_client -connect ip:port -tls1

If the test is successful, then the target supports TLSv1.0.
Can the deployment server be used to deploy apps/configuration to a search head, and also transforms/props/configuration/apps to indexers? To date I'm only using it to deliver apps to universal forwarders.

Note: using Splunk Enterprise 8.2.5 on Windows.
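For reference, a search head is addressed the same way as any other deployment client in serverclass.conf (hostnames and app names below are placeholders); the usual caveat is that clustered indexers are managed through the cluster manager rather than a deployment server:

[serverClass:search_heads]
whitelist.0 = sh01.example.com

[serverClass:search_heads:app:my_search_head_app]
restartSplunkd = true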
I need a list of only those jobName values which start with a letter from a through m, any case. The below does not work:

index=log-13120-nonprod-c laas_appId=qbmp.prediction-engine sourcetype="qbmp.prediction-engine:app" "predicted as Prediction"
| table jobName
| dedup jobName
| where jobName like "[a-m]%"

A sample event looks like this:

08-06-2022 10:19:36.990 [task-53] INFO c.m.b.p.service.PredictionWorkerV2#run - predictionId=1e5a96c6-5f90-4bf9-b0df-7f3528ae642b, threadId=23, job=SRW-REPAPER-LoadedStatus^QNA predicted as Prediction{predictionId='1e5a96c6-5f90-4bf9-b0df-7f3528ae642b', jobName='SRW-REPAPER-LoadedStatus', instance='QNA', predictionStatus='cant_predict', predictedStartTime=-1, predictedFinishTime=-1, predictionExplanation='no_jobstats', predictedAt=1654697976}

The above event has jobName='SRW-REPAPER-LoadedStatus', which does not start with a letter from a through m, so it should not be displayed.
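The likely culprit is that like() only understands the SQL-style wildcards % and _, not character classes. match() takes a regular expression, so a sketch of the same search with a case-insensitive regex:

index=log-13120-nonprod-c laas_appId=qbmp.prediction-engine sourcetype="qbmp.prediction-engine:app" "predicted as Prediction"
| where match(jobName, "(?i)^[a-m]")
| dedup jobName
| table jobName

Moving the filter ahead of table/dedup is optional; the essential change is match() in place of like().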
What is the best approach to creating a field that shows the number of incomplete requests in a given period of time? For the machine in question, events are logged when it completes the Request-Response Loop. I have a field time_taken which shows, in milliseconds, how long the Request-Response Loop has taken. I have already done the following; now how do I evaluate the total number of open_requests for each second?

| eval responded = _time
| eval requested = _time - time_taken
| eval responded = strftime(responded, "%Y/%m/%d %H:%M:%S")
| eval requested = strftime(requested, "%Y/%m/%d %H:%M:%S")
| eval open_requests = ???
| table _time open_requests
| sort - _time
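The built-in concurrency command may fit here: given a start time and a duration per event, it emits a concurrency field counting how many events overlap each event's start. A sketch, assuming time_taken is in milliseconds (hence the /1000) and keeping the string-formatted fields out of the math:

| eval duration = time_taken / 1000
| eval requested = _time - duration
| concurrency start=requested duration=duration
| timechart span=1s max(concurrency) AS open_requests

concurrency counts overlaps at event start times rather than for every tick of the clock, so for a literal per-second series the mvrange/mvexpand pattern sketched under the IIS question above is the closer match.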