All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Came across an interesting behaviour with collect today, depending on whether you specify a sourcetype or not. If a field contains a \ character, collect escapes the \ when you use a sourcetype, but not with stash. These two searches:

| makeresults | eval field="App\X" | collect index=main sourcetype="something_other_than_stash"

| makeresults | eval field="App\X" | collect index=main

will generate two different values for 'field' in the index. With the first search (explicit sourcetype), the resulting field value has two \\ characters. Both examples show the raw event as App\\X, but fieldsummary shows the one including a sourcetype as App\\\\X. Anyone know why this is?
Hi Everyone, I am using the below query:

index=abc ns=uio app_name=api "ARC EVENT RECEIVED FROM SOURCE"
| rex "RID:(?<RID>(\w+-){4}\w+)-(?<sourceagent>\w+-\w+)"
| timechart count(RID) as "RID" by sourceagent

When I put it in a line chart I get dates like: Tue March 2, Wed March 3. I want them all in a proper date format. In the Statistics tab the dates come through correctly, but in the line chart I am not getting proper dates. Can someone guide me on this?
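A hedged sketch of one approach (index, ns, and field names taken from the question above): giving timechart an explicit span often makes the chart pick consistent date labels per bucket:

```
index=abc ns=uio app_name=api "ARC EVENT RECEIVED FROM SOURCE"
| rex "RID:(?<RID>(\w+-){4}\w+)-(?<sourceagent>\w+-\w+)"
| timechart span=1d count(RID) as "RID" by sourceagent
```

If a specific text format is required, appending something like | eval _time=strftime(_time, "%Y-%m-%d") after the timechart would render 2021-03-02-style labels, at the cost of turning the x-axis into categories rather than a continuous time axis.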
Installed the DB Agent and it started successfully on Linux. The Java Agent is installed and working perfectly fine. However, I get an issue when I check the status of the collector in the Controller: "Access denied for user 'root'@'localhost' (using password: YES)". I am on a Linux system and the MySQL database is in /var/mysql on an EC2 instance. Any information will help. Regards, Lucky
Hello all, I am working on getting specific entries deleted once the search runs and a condition holds true. Below is a detailed outline of what I am trying to achieve. The recovery_flag in the KV store that contains the source data is set to 1 or 0 based on the requirement. I want to delete the entries with recovery_flag = 0 on the next run of the search, so that the unwanted entries are removed. Can you guide me through this? Thank you.
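A hedged sketch of one common pattern (the collection/lookup name my_collection is hypothetical, substitute the real KV store lookup name): rewrite the collection keeping only the rows whose flag is not 0, effectively deleting the rest:

```
| inputlookup my_collection where recovery_flag!=0
| outputlookup my_collection
```

This could run as its own scheduled search, or as the first step of the next run. An alternative is a REST DELETE against the storage/collections/data endpoint with a query on recovery_flag, if search-time rewriting is not acceptable.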
Hello everyone, I am trying to add a date input in setup.xml. I have seen that input type=time is listed, but it doesn't work in setup.xml. Can you help me with how to do this?
I have set up a global account and a REST URL that allows a successful POST to obtain a VMware bearer token. How do I then extract that token to be used in subsequent data inputs? Or is this not possible with REST and should it be done with a Python module? (Trying to avoid Python; I don't know Python.)
Hello, I am new to the SPL language. I have been working with 'geostats' recently and I'm not quite sure what 'translatetoxy' or 'locallimit' are used for. Can anybody clarify how those parameters are used? Many thanks!
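For context, a minimal geostats sketch (the index and field names are made up). As I understand the docs, translatetoxy controls whether results are aggregated into map-renderable x/y bins (the default, true) or emitted unbinned, and locallimit caps how many distinct split-by series are kept per bin before the remainder collapse into OTHER:

```
index=weblogs
| iplocation clientip
| geostats latfield=lat longfield=lon locallimit=5 count by http_status
```

This is a sketch under those assumptions; the geostats reference in the Splunk Search Reference is the authoritative source for both parameters.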
Hi All, I have a case where we want to whitelist servers using a CSV or TXT file. I tried creating a simple CSV and tried to push the app, with no joy. I have been looking around for any example implementation of this and was not able to find any. Could somebody help me? For testing I am using:

[serverClass:test4]
filterType = whitelist
whitelist.from_pathname = /opt/splunk/etc/system/local/test.txt

[serverClass:test4:app:test4]

Thanks in advance, M
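In case it helps with debugging, a hedged sketch of what the referenced file at /opt/splunk/etc/system/local/test.txt would contain: one client per line, as hostnames, IPs, or wildcard patterns (the entries below are made up):

```
webserver01.example.com
app-host*
10.1.2.*
```

After editing serverclass.conf or the whitelist file, a `splunk reload deploy-server` on the deployment server is typically needed for the change to take effect.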
Hi, I am using SQS-based S3 inputs (multiple inputs retrieving from the queue) to ingest CloudTrail data. The documentation says the standard input supports exclude_describe_events and blacklist to filter out unwanted events. I'm wondering if the same is supported when it's an SQS-based S3 input. Note: currently I am using props/transforms with a REGEX to exclude events, but I got it working only after a few attempts (after a few rounds of errors about MATCH_LIMIT being exceeded). Thanks
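For comparison, a hedged sketch of the props/transforms route mentioned above (the sourcetype name aws:cloudtrail and the regex are assumptions for your environment; keeping the REGEX short and anchored also helps avoid MATCH_LIMIT errors):

```
# props.conf
[aws:cloudtrail]
TRANSFORMS-drop_describe = drop_describe_events

# transforms.conf
[drop_describe_events]
REGEX = "eventName"\s*:\s*"(Describe|List|Get)
DEST_KEY = queue
FORMAT = nullQueue
```

This discards matching events at parse time on the heavy forwarder or indexer, so they never count against license, which is a different trade-off from filtering at the input itself.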
My current search below pulls findings for the current day and year-to-date starting 2/1/2021. I need help with a way to pull for the previous business week and year-to-date starting 2/1/2021. What adjustments can I make to the search below to pull that?

index=overwatch-summary overwatch-vuln-type="*"
| where _time>strptime("2021/02/01 00:00:00","%Y/%m/%d %H:%M:%S")
| eval _time=if(_time < now()-86400, now()-86400, now())
| rex field=resource_id "subscriptions/(?<subscriptionId>[0-9a-fA-F\-]+)"
| lookup subscription_managed.csv subscriptionId OUTPUT managed
| fillnull value="Unmanaged" managed
| search managed=Unmanaged
| fillnull value="" blob_name
| eval unique_id=if(isnotnull(unique_id),unique_id,sha256('overwatch-vuln-type' . "_" . resource_id . "_" . issue . blob_name))
| chart dc(unique_id) as count over _time
| bin _time span=1d
| append [ stats c | eval _time=now() | eval count=0 | bin _time span=1d | fields _time count ]
| stats sum(count) as count by _time
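A hedged sketch of just the time logic for "previous business week" (Monday through Friday of last week), using relative_time snapped to the week; the rest of the pipeline would stay the same. @w1 snaps to Monday and @w6 to Saturday, so the window runs from last Monday 00:00 up to (but not including) last Saturday 00:00:

```
index=overwatch-summary overwatch-vuln-type="*"
| where _time>=relative_time(now(),"-1w@w1") AND _time<relative_time(now(),"-1w@w6")
```

The same filter could live alongside the existing year-to-date strptime condition in a second panel or an appended subsearch, depending on how the two views need to be combined.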
I have an implementation with Splunk Cloud. As you know, with this implementation the search head and the indexing live in the cloud, and there is no management other than the UI.

On the user's network, all log sources (syslog, Windows, Linux, AWS, database) arrive at a heavy forwarder to be forwarded to Splunk Cloud.

There is a delay in the arrival of the logs, but it only happens with one of the sources: a WAF on an AWS server. The rest of the sources (Linux, Windows, database) arrive without delay. The WAF logs are stored on the HF using rsyslog, and those logs reach the HF without delay; the problem is between the HF and Splunk Cloud. It does not look like queuing, because then all the sources would be slow, but in this case only one of the sources has the delay.
How do I put the Cluster Master into maintenance mode? Can it be done via the GUI, or does it have to be done via the CLI only?
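For reference, the CLI form, run on the cluster master itself (to my knowledge there is no GUI toggle for this in most versions, so the CLI or the equivalent REST endpoint is the usual route; check your version's docs):

```
splunk enable maintenance-mode
splunk show maintenance-mode
splunk disable maintenance-mode
```

Maintenance mode suppresses bucket fix-up activity during rolling restarts or upgrades, so it should be disabled again as soon as the maintenance window is over.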
What are a few critical daily checks to run early in the day to make sure Splunk Enterprise & ES are healthy and functioning? I like the Monitoring Console. Are there built-in features in it that can help?
Hi! I've been at this for hours, attempting to use stats and transaction to do this. I have two events that look like the following:

Event 1: (date) (connection=1234) (op=#) (BIND) (username=[username])
Event 2: (date) (connection=1234) (op=#) (RESULT) (error=49) (INVALID CREDENTIALS)

I want to create a pivot so that usernames and invalid-credentials results can be grouped. Right now I am using the stats command but not getting any results, because username and error=49 are in two different events. Unfortunately, these fields do not contain unique values (the same connection # is shared with many other events, and the same op # is shared with many others). The only thing I can think of is that event 2 comes directly after event 1. Is there a way to group this based on time, or perhaps eval? Any suggestions?
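A hedged sketch of one approach, assuming connection, op, username, and error are already extracted as fields (otherwise a rex would come first), and assuming the index name ldap is a placeholder. Since connection alone repeats but the connection+op pair should tie a BIND to its RESULT, transaction can stitch the two events together:

```
index=ldap ("BIND" OR "RESULT")
| transaction connection op startswith="BIND" endswith="RESULT" maxevents=2
| where error=49
| stats count by username, error
```

If connection+op pairs are not actually unique in your data, a streamstats approach that carries the last-seen username forward within each connection, ordered by _time, is the usual fallback.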
Hello all, I need some assistance using the search below to produce a timechart of the number of events per day for the last 90 days.

index=wineventlog source="WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" EventCode=25
| search Source_Network_Address="*" ComputerName="*" User="*"
| eval "Source IP" = coalesce(Source_Network_Address,"")
| eval clientip=Source_Network_Address
| sort - _time
| iplocation "Source IP"
| where isnotnull(lat)
| streamstats current=f global=f window=1 first(lat) as next_lat first(lon) as next_lon first(_time) as next_time first(clientip) as next_ip first(Country) as next_country first(Region) as next_region by User
| strcat lat "," lon pointA
| haversine originField=pointA units=mi inputFieldLat=next_lat inputFieldLon=next_lon outputField=distance_miles
| strcat next_lat "," next_lon pointB
| eval time_dif=(((next_time - _time)/60)/60), distance_miles=round(distance_miles, 2), time_dif=round(time_dif, 2)
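If the goal is simply a per-day event count, a minimal sketch (run over a Last 90 days time range, or pin it with earliest as below; the output field name logons is made up):

```
index=wineventlog source="WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" EventCode=25 earliest=-90d@d
| timechart span=1d count as logons
```

The geo-distance logic in the original search could be appended after this only if the chart actually needs those derived fields; for a plain daily count it can be dropped.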
Hi Everyone, I have a query like this:

index=abc ns=xyz app_name=api "ARC EVENT RECEIVED FROM SOURCE"
| rex "RID:(?<RID>(\w+-){4}\w+)-(?<sourceagent>\w+-\w+)"
| stats count(RID) as count, values(RID) as RID by sourceagent
| rename sourceagent as "Source"
| fields Source count

I am using a bar chart, so Source is on the Y axis and count on the X axis. The issue I am facing is that the count values on the X axis appear as 1.1, 2.5, 1, 3; I want distinct numeric values like 1, 2, 3, 4. Below is my panel. Can anyone guide me on this?

<panel>
  <title>Number of Requests Received from Source</title>
  <chart>
    <search>
      <query>index=abc ns=xyz app_name=api "ARC EVENT RECEIVED FROM SOURCE"| rex "RID:(?&lt;RID&gt;(\w+-){4}\w+)-(?&lt;sourceagent&gt;\w+-\w+)" | stats count(RID) as count, values(RID) as RID by sourceagent| rename sourceagent as "Source"|fields Source count</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisX.abbreviation">none</option>
    <option name="charting.axisX.scale">linear</option>
    <option name="charting.axisY.abbreviation">none</option>
    <option name="charting.axisY.scale">linear</option>
    <option name="charting.axisY2.abbreviation">none</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.axisY2.scale">inherit</option>
    <option name="charting.chart">bar</option>
    <option name="charting.chart.bubbleMaximumSize">50</option>
    <option name="charting.chart.bubbleMinimumSize">10</option>
    <option name="charting.chart.bubbleSizeBy">area</option>
    <option name="charting.chart.nullValueMode">gaps</option>
    <option name="charting.chart.showDataLabels">none</option>
    <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
    <option name="charting.chart.stackMode">default</option>
    <option name="charting.chart.style">shiny</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    <option name="charting.legend.mode">standard</option>
    <option name="charting.legend.placement">right</option>
    <option name="charting.lineWidth">2</option>
  </chart>
</panel>
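One hedged option for forcing whole-number tick marks on the count axis. Since charting.chart is bar, the counts run along the X axis, and a majorUnit of 1 should keep the labels at integers. I have not verified this against this exact dashboard, so treat it as a sketch to try alongside the other options:

```
<option name="charting.axisLabelsX.majorUnit">1</option>
```

If the chart type were column instead of bar, the equivalent would be charting.axisLabelsY.majorUnit.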
Hi, I'm getting started with Splunk. The stated prerequisite is 12 GB of RAM but my system has 8 GB. I need help deciding the better way to install Splunk: a VM, Docker, or ... Thanks.
I would like to be able to retrieve the name of the current search to pass to a macro within the search.

Saved search "Access - Cleartext Password At Rest" in the app:

| from datamodel:"Compute_Inventory"."Cleartext_Passwords"
| `get_info($SEARCH_NAME$)`
| stats max(_time) as "lastTime", latest(_raw) as "orig_raw", values(tag) as "tag", count by "dest","user","password"

Macro "get_info", argument: searchname

lookup searchparms $searchname$

So in this example, when the scheduled search "Access - Cleartext Password At Rest" runs, it would look up information from "searchparms" for "Access - Cleartext Password At Rest".
Hi, when trying to call some REST APIs in a custom script using the requests package, if the URL is https Splunk throws the error: "Can't connect to HTTPS URL because the SSL module is not available." Anyone know why this might not be available, or how to get around it? Looking at other Splunk scripts, they all reference http, not https. I tried manually adding the ssl module into the TA bin folder, but it throws more errors.