All Topics

I want to find where our logs are coming from: which sources a given data model is ingesting logs from, and which data model is used for correlation.
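A minimal sketch of one way to see what feeds a data model, assuming the data model name (Network_Traffic here is a placeholder) and that you have read access to it; tstats can split the data model's events by the source, sourcetype, and index metadata fields:

```
| tstats count from datamodel=Network_Traffic by index, sourcetype, source
```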
Is there a way to rename the extracted fields in the Interesting Fields section? An example would be: Interesting Fields xxxxxname1xxxx -> name1. Thanks in advance
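If the goal is to surface a friendlier field name, one option is a field alias in props.conf, sketched below; the sourcetype name is a placeholder, and the alias adds the new name alongside the original rather than replacing it:

```
[my_sourcetype]
FIELDALIAS-friendly = xxxxxname1xxxx AS name1
```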
I cannot find this question being asked this way round, so hopefully it's not a duplicate. I have a lookup CSV like this:

ip,ip-info,timestamp
1.2.3.4,Text about the IP,2020-04-16T17:20:00
4.3.2.1,Different Text Here,2020-01-01T09:00:00

My log source summarises IPs to a CIDR subnet (I have no control over this); the extracted field looks like cidr=1.2.3.0/24. I need to use the CIDR from the log source to get the entries within that subnet via a lookup, e.g.

index=myindex | lookup ip-info ip AS cidr OUTPUT ip-info timestamp AS ip-info-timestamp

I have tried adding match_type = CIDR(ip) to the [ip-info] stanza in $SPLUNK_HOME/etc/apps/search/local/transforms.conf, but I think this is for looking up an IP against CIDR masks, rather than the way round I need, as no results are returned. Is this possible, and if so, how do I achieve it? Thanks
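One possible workaround, sketched under the assumption that the subnets are always /24 (it would need adapting otherwise): derive a CIDR key from each IP in the lookup and write it out to a second lookup that can then be matched exactly; ip-info-cidr is a hypothetical name:

```
| inputlookup ip-info
| eval cidr=replace(ip, "\.\d+$", ".0/24")
| outputlookup ip-info-cidr
```

Afterwards the search-side lookup becomes an exact match on the cidr field, e.g. | lookup ip-info-cidr cidr OUTPUT ip-info timestamp AS ip-info-timestamp.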
Greetings, our developers log which user views a particular web page and flag it via the "ID" field. If a user also runs a query within the web page during that session, the query is logged in a different table using the "URL_REQUEST_ID" field. The ID and the URL_REQUEST_ID hold the same value. How can I join the two searches based on the value in the "ID" field from that first search? Basically I want to list the pages they viewed and any corresponding queries they ran in one report/output. Thanks for any help.
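A common alternative to join for this shape of problem, sketched with hypothetical index, sourcetype, and field names: search both sources at once, coalesce the two ID fields into one key, and group with stats:

```
index=weblogs (sourcetype=page_views OR sourcetype=page_queries)
| eval session_id=coalesce(ID, URL_REQUEST_ID)
| stats values(page) AS pages_viewed, values(query) AS queries_run by session_id
```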
Hi Team, we are using the "Splunk Add-on for AppDynamics" on our heavy forwarder to poll data into Splunk. We are seeing high CPU utilization on the server after configuring the inputs in this add-on. Once I turned off the inputs, CPU utilization returned to normal. Requesting your inputs to resolve this issue. Add-on version: 1.7.5. CPU cores: 8.
Hello, I'm trying the following request in Postman to get the list of Active Directory users:

http://:8000/en-GB/manager/search/authentication/users | fields title roles realname

The request header has basic authentication and I get a 200 response. However, the response body is not what I expected. I'm getting the following response body:

<!doctype html> <!--[if lt IE 7]> <html class="no-js ie lt-ie9 lt-ie8 lt-ie7"> <![endif]--> <!--[if IE 7]> <html class="no-js ie7 lt-ie9 lt-ie8"> <![endif]--> <!--[if IE 8]> <html class="no-js ie8 lt-ie9"> <![endif]--> <!--[if IE 9]> <html class="no-js ie9"> <![endif]--> <!--[if gt IE 9]><!--> <html class="no-js"> <!--<![endif]--> <head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <title></title> <meta name="description" content="listen to your data" /> <meta name="author" content="Splunk Inc." /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <meta name="referrer" content="origin" /> <script type="text/json" id="splunkd-partials"> {"/services/session":{"messages":[],"links":{},"entry":[{"fields":{"optional":[],"required":[],"wildcard":[]},"acl ..... </script> <script>

When I click on Postman's Preview tab, it says "Splunk relies on JavaScript to function properly. Please enable JavaScript and then refresh the page to login." Is my request correct for Splunk Enterprise version 7.3.1? Note that I was able to get the expected result when I tried the search command in the GUI:

| rest /services/authentication/users | fields title roles realname

Thanks in advance
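For what it's worth: port 8000 serves Splunk Web, which returns HTML (hence the "enable JavaScript" page), while the REST API is normally exposed on the management port, 8089. One request shape to try in Postman, with the host as an assumption about your environment:

```
GET https://<host>:8089/services/authentication/users?output_mode=json
```

Note that the | fields title roles realname part is SPL and only works inside a search; with the raw REST endpoint you would pick those keys out of the JSON response instead.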
I have the following search set up:

search string
| fields host raw
| fields - _time _indextime _sourcetype _subsecond _serial _bkt _cd _si _kv _timediff
| head 1
| join append [ stats count | fields - count ]
| eval SourcePath=WHAT TO PUT HERE?
| eval ConfigItem="Config Item"
| eval PAGER="Pager"
| eval TEAM="Team"
| eval GROUP="Group"
| eval SHORTDESCRIPTION="Short Description"
| table host _raw SourcePath Config.Item PAGER TEAM GROUP SHORTDESCRIPTION
| rex mode=sed "s/\,//g"
| rex mode=sed "s/[^a-zA-Z0-9-.]+/ /g"
| outputcsv file.csv

Everything is working as required; I am just not sure what I should set eval SourcePath= to in order to obtain the source log file's path. Anyone able to assist? Thanks a bunch! Tom
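If what's wanted is the path of the originating log file, Splunk's default source field normally holds exactly that for file-monitored inputs, so the eval could simply be:

```
| eval SourcePath=source
```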
I would like to run the forwarder in a container and have it forward my host logs from /var/log. So I mount the host /var into the container under /var/hostvar and run the container (in privileged mode) like this:

docker run -ti --rm --privileged -v /var:/var/hostvar --network host --env "SPLUNK_START_ARGS=--accept-license" --env "SPLUNK_FORWARD_SERVER=10.166.11.158:9997" --env "SPLUNK_PASSWORD=P@ssw0rd" --env "SPLUNK_ADD=/var/hostvar/log/" --name uf splunk/universalforwarder:latest

The directory is mapped correctly; however, I get no data out of the forwarder (I can ping the Splunk target from within the container). The problem seems to be that, despite being marked as privileged, it cannot read my host kern.log file. I did a docker exec into the container to cat the file and I get:

#docker exec -ti uf cat /var/hostvar/log/kern.log
cat: /var/hostvar/log/kern.log: Permission denied

I tried the same thing (mounting that directory and catting the file) using busybox and it worked just fine. Anybody have any thoughts?
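One thing worth checking (an assumption to verify, not a confirmed diagnosis): --privileged grants capabilities to the container, but the splunk/universalforwarder image runs the Splunk process as a non-root splunk user, whereas busybox runs as root by default; ordinary file permissions on kern.log (often root:adm, mode 640) would then explain the difference. Comparing the effective users makes this visible:

```
docker exec -ti uf id
docker run --rm busybox id
```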
Is there any way to send already-indexed Splunk data (from one index) into Azure Data Lake Storage or Azure Blob Storage? Splunk data >> Azure Data Lake Storage / Blob Storage
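Not that I know of a native "write an index to Azure" output in core Splunk, but one workable pattern is export-then-upload. A rough sketch, where the CLI flags, the SAS token, and azcopy availability are all assumptions to verify against your environment:

```
splunk search "index=myindex earliest=-24h" -output csv -maxout 0 > export.csv
azcopy copy export.csv "https://<account>.blob.core.windows.net/<container>/export.csv?<SAS>"
```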
This is a style question, as I've already gotten my results, but I was curious to see others' methodology. Following the information in this AWS post, I did the following:

- Search for userIdentity.type=AssumedRole
- Inner join on userIdentity.accessKeyId with the results of a search for eventName=AssumeRole, deduped on responseElements.credentials.accessKeyId and renamed to userIdentity.accessKeyId

My final search looks like this:

index="aws_cloudtrail" userIdentity.type=AssumedRole
| join type=inner userIdentity.accessKeyId
    [| search index="aws_cloudtrail" eventName=AssumeRole
     | dedup responseElements.credentials.accessKeyId
     | spath "userIdentity.principalId"
     | rex field=userIdentity.principalId "\:(?<principalId>.*)"
     | rename requestParameters.roleArn as requestedRole, responseElements.credentials.accessKeyId as userIdentity.accessKeyId
     | fields requestedRole, principalId, userIdentity.accessKeyId]
| table _time, principalId, requestedRole, eventName, requestParameters.bucketName, errorCode
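Since it's a style question, one join-free variant for comparison, sketched with the same fields; stats-based correlation avoids the subsearch limits of join, but treat this as a rough equivalent to verify rather than a drop-in replacement:

```
index="aws_cloudtrail" (userIdentity.type=AssumedRole OR eventName=AssumeRole)
| eval key=coalesce('responseElements.credentials.accessKeyId', 'userIdentity.accessKeyId')
| stats values(requestParameters.roleArn) AS requestedRole, values(eventName) AS eventName by key
```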
Feb 18 18:36:20 smtp2 sm-mta[17872]: l1J0a3fO017872: discarded

I have one sample event. When I try this, it gives me a "could not use strptime to parse timestamp" error (picture as attached). Below is my sample props.conf:

[ email_log ]
BREAK_ONLY_BEFORE=\w+\s+\d+\s+\d+:\d+:\d+
CHARSET=AUTO
MAX_TIMESTAMP_LOOKAHEAD=15
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIME_FORMAT=%a %d %H:%M:%S
disabled=false
pulldown_type=true
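One detail worth checking, since the sample timestamp starts with a month abbreviation ("Feb"): in strptime, %a matches an abbreviated weekday (Mon, Tue) while %b matches an abbreviated month, so the format may need to be:

```
TIME_FORMAT = %b %d %H:%M:%S
```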
Has anyone created a Python script to update knowledge object permissions in bulk?
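Not a finished tool, but a minimal sketch of how such a script might drive the REST ACL endpoint (servicesNS/&lt;owner&gt;/&lt;app&gt;/&lt;type&gt;/&lt;name&gt;/acl); the host, credentials, and object list are placeholders, and the requests library is assumed to be available:

```python
def acl_url(base, owner, app, obj_type, name):
    """Build the REST ACL endpoint URL for one knowledge object."""
    return f"{base}/servicesNS/{owner}/{app}/{obj_type}/{name}/acl"

def acl_payload(owner, sharing, read_roles, write_roles):
    """Build the POST body understood by the .../acl endpoint."""
    return {
        "owner": owner,
        "sharing": sharing,  # one of: user, app, global
        "perms.read": ",".join(read_roles),
        "perms.write": ",".join(write_roles),
    }

def update_permissions(base, auth, objects):
    """POST new permissions for each (owner, app, obj_type, name) tuple."""
    import requests  # imported here so the pure helpers above have no dependency
    for owner, app, obj_type, name in objects:
        resp = requests.post(
            acl_url(base, owner, app, obj_type, name),
            data=acl_payload(owner, "app", ["user", "power"], ["admin"]),
            auth=auth,
            verify=False,  # self-signed certificates are common on port 8089
        )
        resp.raise_for_status()

# Example (would contact a live server):
# update_permissions("https://localhost:8089", ("admin", "changeme"),
#                    [("nobody", "search", "saved/searches", "My Report")])
```

The object list would typically come from a prior GET of the same servicesNS collections, filtered to the objects you want to re-share.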
Hello, I tried to install the Cisco ACI Add-on for Splunk Enterprise on a Splunk server running version 7.2.3. After completing the form with the APIC IP address, local username, and password, I get the following error message: "Error connecting to /servicesNS/nobody/TA_cisco-ACI/apps/local/TA_cisco-ACI/setup: ('The read operation timed out',)" Can you help me understand the meaning of this message? Is this a Splunk app or server problem? FYI, I ran Wireshark to check whether the Splunk server initiated a connection with the APIC. No packet left the server, so the problem is local to the Splunk server. Thanks in advance
Hello, I'm trying to figure out how to use Splunk to monitor payments processing. One of the business rules is to trigger one alert (and only one) per payment as soon as it is "late"; a late payment means it was not processed within a predefined time window. I have the search query that returns the results I need, but the challenges/prerequisites are:

- There is no per-event alerting in Splunk, only per-result, which means a search query that returns 2 events will trigger 1 alert.
- Having a search query that returns only 1 late payment at a time is, in my case, not possible.
- In addition, I have a KPI, "Nb of late payments", that needs to be decreased when alerts on payments are deleted (via the "Delete" action in the Triggered Alerts page).

Example scenario: I have 10 ongoing late payments and want to raise 10 alerts individually. Then, if I delete 1 alert, I need to somehow "acknowledge" the payment to tell Splunk to: 1) stop raising alerts for this payment, and 2) add some data/flag/boolean to the payment so I can use it to filter the KPI and decrease its value (e.g. search alert_acked=false). Is it possible to handle this scenario easily in Splunk? Is there another way to achieve the same functionality? Thanks in advance for your help.
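One pattern sometimes used for "alert once per entity" is a state lookup of already-alerted payments: filter new results against it, let the alert fire once per result (the "trigger for each result" option), and append the new IDs back. A sketch, where the lookup name alerted_payments and the payment_id and acked fields are hypothetical:

```
index=payments <late-payment conditions>
| lookup alerted_payments payment_id OUTPUT acked
| where isnull(acked)
| eval acked="false"
| fields payment_id acked
| outputlookup append=true alerted_payments
```

Acknowledging a payment would then mean editing its row in the lookup (e.g. setting acked="true") rather than deleting the triggered alert, which also gives the KPI search a flag to filter on.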
Hi all, starting from a CSV which contains 10 items, I want to count (in order to produce a chart) all the matches in the index, and all the matches where a specific field is equal to 1. So, two values for each item: first all matches, and second all the matches where the other field = 1. If no matches are found, the value for the CSV item should be 0. I hope it's clear. I tried to do it without the CSV, but if an item has no matches in the index, that item is not shown in the chart. Thanks, Fabrizio
Hi all, I am working in a clustered environment with 16 production indexers and a separate cluster master node. If I run

/opt/splunk/bin/splunk list cluster-peers

on the master node, it shows all the indexers in the cluster; but if I run the same command on any of the indexers, it says the master node is not enabled. How do I enable clustering against the master node on each indexer from the command line? This came up while pairing the indexers with the master node.
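For reference, indexers are normally joined to the cluster from each peer with splunk edit cluster-config; a sketch with placeholder values (option names as used in this version era):

```
/opt/splunk/bin/splunk edit cluster-config -mode slave -master_uri https://<master-node>:8089 -replication_port 9887 -secret <pass4SymmKey>
/opt/splunk/bin/splunk restart
```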
Hello, I would like to know the meaning of the "is_service_max_severity_event" field. It appears that in my ITSI instance, each entry for a service-level KPI (is_service_aggregate = 1) has 2 events: one with is_service_max_severity_event set to 0, and the other with it set to 1. Can somebody explain? Thank you in advance.
I have multiple events coming in as one, and I need to separate them into individual events in order to create a table, etc. Is there a way to do it at search time?

{ "Timestamp": "2020-02-08T15:45:00.036Z", "Query Parameters": "", "RequestMethod": "POST", "Request": "{tt}", "Response": "{tt}", "HTTPStatusCode": "200", "TotalResponseTimeApprox.(ms)": "290.0", "TargetResponseTime(ms)": "241.0" }{ "Timestamp": "2020-02-08T15:45:00.334Z", "Query Parameters": "", "RequestMethod": "POST", "Request": "{tt}", "Response": "{tt}", "HTTPStatusCode": "200", "TotalResponseTimeApprox.(ms)": "290.0", "TargetResponseTime(ms)": "241.0" }

Thank you
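At search time, one sketch (assuming the records are only ever glued together as }{ as in the sample, with no }{ inside any record) is to mark the boundary, split _raw into a multivalue field, and expand:

```
... | eval raw=split(replace(_raw, "\}\s*\{", "}###{"), "###")
| mvexpand raw
| spath input=raw
```

If re-ingesting is an option, the cleaner fix is at index time in props.conf with SHOULD_LINEMERGE = false and a LINE_BREAKER such as \}(\s*)\{, which breaks between the closing and opening braces.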
hi Friends, below are my queries:

index=perfmon source="Perfmon:LogicalDisk" counter="% Free Space"
| search host = DMOPWMD1PDDB0*
| eval FreeSpace =100-( Value )
| stats min(FreeSpace) as hostavg by host,instance
| table host,instance,hostavg
| chart min(hostavg) by host,instance

index=perfmon sourcetype="Perfmon:Memory" counter="% Committed Bytes In Use"
| search host = DMOPWMD1PDDB0*
| timechart perc90(Value) by host limit=0 span=1m

I created the base searches below and built the panels from them. They work fine in search, but not in the dashboard: the panels show "No Results". Could you please advise?

Base searches:

<search id="Disk1">
  <query>index=perfmon source="Perfmon:LogicalDisk" counter="% Free Space"</query>
  <earliest>$TimeRangePkr.earliest$</earliest>
  <latest>$TimeRangePkr.latest$</latest>
  <refresh>5m</refresh>
  <refreshType>delay</refreshType>
  <progress>
    <set token="show_html">true</set>
  </progress>
  <done>
    <unset token="show_html"></unset>
  </done>
</search>

<search id="Mem">
  <query>index=perfmon sourcetype="Perfmon:Memory" counter="% Committed Bytes In Use"</query>
  <earliest>$TimeRangePkr.earliest$</earliest>
  <latest>$TimeRangePkr.latest$</latest>
  <refresh>5m</refresh>
  <refreshType>delay</refreshType>
  <progress>
    <set token="show_html">true</set>
  </progress>
  <done>
    <unset token="show_html"></unset>
  </done>
</search>

Panels:

<panel>
  <chart>
    <title>DISK%</title>
    <search base="Disk1">
      <query>| search host = DMOPWMD1PDDB0* | eval FreeSpace =100-( Value ) | stats min(FreeSpace) as hostavg by host,instance | table host,instance,hostavg | chart min(hostavg) by host,instance</query>
    </search>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.text">TIME</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.text">HOST</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisX.scale">linear</option>
    <option name="charting.axisY.scale">linear</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.axisY2.scale">inherit</option>
    <option name="charting.chart">column</option>
    <option name="charting.chart.bubbleMaximumSize">50</option>
    <option name="charting.chart.bubbleMinimumSize">10</option>
    <option name="charting.chart.bubbleSizeBy">area</option>
    <option name="charting.chart.nullValueMode">gaps</option>
    <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
    <option name="charting.chart.stackMode">default</option>
    <option name="charting.chart.style">shiny</option>
    <option name="charting.drilldown">all</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    <option name="charting.legend.placement">right</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>

<panel>
  <chart>
    <title>MEMORY%</title>
    <search base="Mem">
      <query>| search host = DMOPWMD1PDDB0* | timechart perc90(Value) by host limit=0 span=1m</query>
    </search>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.text">TIME</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.text">HOST</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisX.scale">linear</option>
    <option name="charting.axisY.scale">linear</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.axisY2.scale">inherit</option>
    <option name="charting.chart">line</option>
    <option name="charting.chart.bubbleMaximumSize">50</option>
    <option name="charting.chart.bubbleMinimumSize">10</option>
    <option name="charting.chart.bubbleSizeBy">area</option>
    <option name="charting.chart.nullValueMode">gaps</option>
    <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
    <option name="charting.chart.stackMode">default</option>
    <option name="charting.chart.style">shiny</option>
    <option name="charting.drilldown">all</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    <option name="charting.legend.placement">right</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>
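A common cause of "No Results" with post-process (base) searches, worth checking here: when the base search is non-transforming, only a limited number of events and fields are passed to the post-process panels, so fields the panels rely on can be missing. The usual fix is to end the base query with an explicit fields command, along these lines (a sketch against the Disk1 base search):

```
<search id="Disk1">
  <query>index=perfmon source="Perfmon:LogicalDisk" counter="% Free Space" | fields _time host instance Value</query>
  <earliest>$TimeRangePkr.earliest$</earliest>
  <latest>$TimeRangePkr.latest$</latest>
</search>
```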
How would I need to modify the query below to get the memory value as a percentage when it exceeds a 90% threshold? Kindly suggest.

host=ABC* index=perfmon sourcetype="PerfmonMk:Memory" Available_Bytes=* Available_KBytes=* Available_MBytes=*
| stats by _time, Available_MBytes
| table _time, Available_MBytes
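One thing to note: the Available_* counters alone can't give a percentage; you also need the host's total memory. A sketch, where the host_memory lookup and its total_mbytes field are hypothetical stand-ins for however you obtain total RAM per host:

```
host=ABC* index=perfmon sourcetype="PerfmonMk:Memory" Available_MBytes=*
| lookup host_memory host OUTPUT total_mbytes
| eval used_pct=round(100 * (1 - Available_MBytes / total_mbytes), 2)
| where used_pct > 90
| table _time, host, used_pct
```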