All Topics

Hi team, I have the query below:

index=*bizx_application AND sourcetype=perf_log_bizx AND AutoSaveForm OR SaveFormV2 OR SaveForm | timechart count by SFDC useother=false limit=0

The timechart returned the results below. Now I want to adjust the _time scale on the x-axis to run from latest to earliest, i.e. put the latest _time and its corresponding count on the left. How should I modify my query to achieve this?
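A hedged sketch of one possible approach: timechart output is ordered by _time ascending, and for a statistics-table view you can simply reverse the row order afterwards (note that chart visualizations may still render the x-axis ascending regardless of row order):

```spl
index=*bizx_application sourcetype=perf_log_bizx (AutoSaveForm OR SaveFormV2 OR SaveForm)
| timechart count by SFDC useother=false limit=0
| sort 0 - _time
```

The parentheses around the OR clause are an assumption about the intended filter; `sort 0` keeps all rows rather than the default 10,000.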
Hi Team, I am running a tstats count on my accelerated data model for certain time periods. The result I get when I run the query today is different from what I get tomorrow, even though all the parameters, i.e. the query and the time period over which it runs, are the same. Can anybody please help me understand why this is happening and how we can fix it, so that we get the right result every time this query is run? We need to populate dashboards based on this query. Thanks and regards, AG
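One common cause of this is that an accelerated data model's summaries are built asynchronously, so a `tstats` run today may cover buckets that were not yet summarized yesterday. A hedged sketch (the data model name is a placeholder): pinning `summariesonly` makes the behavior explicit, and restricting the window to fully-elapsed days avoids counting a partially-summarized current day:

```spl
| tstats count from datamodel=My_DataModel where earliest=-7d@d latest=@d summariesonly=false
```

With `summariesonly=false` (the default), Splunk falls back to raw events for unsummarized ranges, which tends to give consistent counts at the cost of speed; `summariesonly=true` is faster but only stable once the summary has caught up.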
Hi All, I would like to extract the values from addtotals. The current result from my search is as follows:

_time           fielda  fieldb  fieldc  fieldd  Total by Day
day_1           1       2       1       2       6
day_2           2       2       2       2       8
day_...         1       1       1       1       4
day_n           3       1       2       4       11
Total by Field  7       6       6       9

The last lines of my search are as follows:

|....
|timechart span=1d sum(A) as B by C limit=0
|addtotals col=t labelfield=_time label="Total by Field" fieldname="Total by Day"

After the 'addtotals' portion, I would like to extract the values of "Total by Field" (shown in bold and underlined) for further calculations, or find an alternative method to use instead of 'addtotals'. I've tried stats sum and eval to do so but couldn't do it. Anyone here able to advise on this? Thanks in advance!
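A hedged sketch of one way to pull out that summary row: `addtotals col=t` writes the column totals into a row whose `labelfield` carries the label, so filtering on that value should isolate it (untested against this exact dataset):

```spl
| timechart span=1d sum(A) as B by C limit=0
| addtotals col=t labelfield=_time label="Total by Field" fieldname="Total by Day"
| search _time="Total by Field"
```

Alternatively, computing the column totals directly with `| fields - _time | stats sum(*) as *` after the timechart avoids `addtotals` entirely.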
Hi, we have installed "Splunk for AWS"; however, the below alert is not working and the search returns "No results found":

`aws-cloudtrail-sourcetype` eventName=StopInstances OR eventName=RebootInstances OR eventName=TerminateInstances NOT errorCode | rename "requestParameters.instancesSet.items{}.instanceId" AS instanceId | stats values(instanceId) as instanceId count(instanceId) as count by awsRegion eventName eventTime userIdentity.arn eventID
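One thing worth checking (a hedged guess, not a confirmed diagnosis): `NOT errorCode` is a bare term search, whereas excluding events that carry an `errorCode` field is usually written `NOT errorCode=*`, and the OR clause may need parentheses so the macro's terms are not ANDed with only the first eventName:

```spl
`aws-cloudtrail-sourcetype` (eventName=StopInstances OR eventName=RebootInstances OR eventName=TerminateInstances) NOT errorCode=*
| rename "requestParameters.instancesSet.items{}.instanceId" AS instanceId
| stats values(instanceId) as instanceId count(instanceId) as count by awsRegion eventName eventTime userIdentity.arn eventID
```

It is also worth verifying that the `aws-cloudtrail-sourcetype` macro expands to a sourcetype that actually has events in the selected time range.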
Hello, Today I watched a session at .conf20 that piqued my interest about the Deep Learning Toolkit. So of course I hopped over to Splunkbase and added it to my local dev environment. Then quickly found out that version 4 is only available via GitHub (for now). I was able to successfully clone the repo and copy the contents to the etc/apps/dltk directory. The following documentation worked exactly as expected: Install I skipped the steps to apply role. I have confirmed the roles were added to the environment. However, I cannot complete the second half of the following document (validation). Connecting Environment If I click on the Algorithms menu item, I have nothing but a create button. I have tried to create one with no luck either: Name: MyAlgo Runtime: Base Environment: <one created after installation>  The system will say "Saving..... Please be patient" for about 3 mins then yields this (only after upgrading to S/E 8.1):   {"messages":[{"type":"WARN","text":"Could not parse xml reply (no reply from script). 
See splunkd.log for more info."}]}

splunkd.log:

10-21-2020 15:24:33.780 -0600 WARN HttpListener - Socket error from 127.0.0.1:64908 while accessing /services/dltk/deployments: Winsock error 10053
10-21-2020 15:24:39.825 -0600 WARN DispatchSearchMetadata - could not read metadata file: C:\Splunk\var\run\splunk\dispatch\admin__admin__dltk__search10_1603315479.70\metadata.csv (repeated 8 times between 15:24:39.825 and 15:24:39.826)
10-21-2020 15:26:06.247 -0600 WARN HttpListener - Socket error from 127.0.0.1:65292 while accessing /services/dltk/deployments: Winsock error 10053
10-21-2020 15:29:34.956 -0600 INFO MetricSchemaProcessor - log messages will be throttled. POST to /services/admin/metric-schema-reload/_reload will force reset of the throttle counters
10-21-2020 15:29:35.286 -0600 INFO IndexWriter - Creating hot bucket=hot_v1_61, idx=_telemetry, event timestamp=1603315775, reason="suitable bucket not found, number of hot buckets=0, max=3"
10-21-2020 15:29:35.297 -0600 INFO DatabaseDirectoryManager - idx=_telemetry writing a bucket manifest in hotWarmPath='C:\Splunk\var\lib\splunk\_telemetry\db' pendingBucketUpdates=1 innerLockTime=0.016. Reason='New hot bucket bid=_telemetry~61~5CCEBD6A-D90D-4E2E-9256-E10D2DC91EE3 bucket_action=add'
10-21-2020 15:29:35.306 -0600 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=C:\Splunk\var\lib\splunk\_telemetry\db duration=0.016
10-21-2020 15:40:49.523 -0600 WARN HttpListener - Socket error from 127.0.0.1:52171 while accessing /services/dltk/algorithms: Winsock error 10053
10-21-2020 15:44:54.837 -0600 WARN HttpListener - Socket error from 127.0.0.1:52907 while accessing /services/dltk/algorithms: Winsock error 10053
10-21-2020 15:47:37.303 -0600 WARN HttpListener - Socket error from 127.0.0.1:53519 while accessing /services/dltk/algorithms: Winsock error 10053
10-21-2020 15:50:19.236 -0600 WARN HttpListener - Socket error from 127.0.0.1:54415 while accessing /services/dltk/algorithms: Winsock error 10053

Here's my config:
Hello people. At the moment I need to upgrade a bunch of UFs (Linux and Windows) from versions 6 & 7 to 8.0. I have found these 2 apps on Splunkbase that do that upgrade, and I tried them in my own lab, but I can't make the forwarder upgrade work. Apps: https://splunkbase.splunk.com/app/5003/ & https://splunkbase.splunk.com/app/5004/. Can these apps help me? Or does anyone have a guide for upgrading these UFs? Thanks a lot!!
I have a requirement to limit the number of values displayed for each row in a table. For example, if I have 50 values for an IP address, then I want to show only 2 values, and the rest should be hidden behind a "show hidden values" link. @niketn
Is there a clear list of pros and cons of using HEC vs heavy forwarders? Also, are there any best practices or preferences for using these two options in a given setting?
I have a search query that gives me a count of vulnerabilities broken down by age (in days).  I want to be able to have a different color for each column.  The columns are, in days, 0-30 (light green), 31-60 (orange), 61-90 (light blue), 91-180 (yellow), and Older than 180 (red).  When I use 'charting.seriesColors', all of the columns turn light green. Search query:  index=qualys_host_detection TYPE=CONFIRMED ("severity=3" OR "severity=4" OR "severity=5") OS="Windows Server*" OR "Microsoft Windows Server" OR "VMWare" STATUS!=FIXED | dedup QID, HOST_ID | eval firstseen=strptime(FIRST_FOUND_DATETIME, "%Y-%m-%dT%H:%M:%S"), epochnow=now(), duration=round((epochnow-firstseen)/86400,0), days=case(duration<=30, "0-30", duration>30 AND duration<=60, "31-60", duration>60 AND duration<=90, "61-90", duration>90 AND duration<=180, "91-180", duration>180, "Older than 180") | stats count by days
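`charting.seriesColors` assigns colors to series, and `stats count by days` produces a single series (`count`), which would explain every column taking the first color. A hedged sketch (untested): transposing so each age bucket becomes its own series lets `charting.fieldColors` map a color to each bucket by name:

```spl
... | stats count by days
| transpose header_field=days column_name=metric
| fields - metric
```

with a dashboard option along the lines of `<option name="charting.fieldColors">{"0-30": 0x90EE90, "31-60": 0xFFA500, "61-90": 0xADD8E6, "91-180": 0xFFFF00, "Older than 180": 0xFF0000}</option>` (the hex values here are illustrative).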
Hello, I'm a total Splunk novice, so sorry if this is a completely obvious solution. I have a SingleValue visualization that I'd like to add a trend component to (so I'm switching from `stats count` to `timechart count`). The issue is that I want the discrete events to be aggregated into a single count based on a span consistent with the time picker. The default timechart behavior counts all events separately. Example: the timepicker input is set to last 24 hours; I now want my timechart command to have a span of 24h. This should work dynamically with any timepicker value. From what I've researched so far, it looks as though I need to work with the source XML and some tokens, but I'm not sure what exactly to do. I tried to simply set `span = $time_tok$`, but that was not successful. Thanks for the help in advance!
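`span=` cannot be computed inside the timechart itself, but in Simple XML a small helper search can derive the picker's window length and publish it as a token. A hedged sketch (the token and field names here are hypothetical):

```xml
<search>
  <query>| makeresults | addinfo | eval span_sec=round(info_max_time - info_min_time)</query>
  <earliest>$time_tok.earliest$</earliest>
  <latest>$time_tok.latest$</latest>
  <done>
    <set token="span_tok">$result.span_sec$</set>
  </done>
</search>
```

The panel search would then use `| timechart span=$span_tok$s count`, giving one bucket spanning the whole picker range; dividing `span_sec` by 2 instead yields two buckets if the single-value trend needs a before/after comparison.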
Hi, I am having trouble setting up the following topology. There is 1 UF which needs to forward uncooked raw data to a 3rd-party receiver that is distributed and consists of 2 nodes.

[indexAndForward]
index = false

[tcpout:splunk-searchhead-group]
disabled = false
server = so1:9997

[tcpout-server://so1:9997]
[tcpout-server://3rd_party_node_1:3535]
[tcpout-server://3rd_party_node_2:3535]

[tcpout]
defaultGroup = splunk-searchhead-group

[tcpout:default-autolb-group]
disabled = false
server = 3rd_party_node_1:3535,3rd_party_node_2:3535
sendCookedData = false
forceTimebasedAutoLB = true
autoLBVolume = 2
autoLBFrequency = 5
maxQueueSize = auto
indexAndForward = false
blockOnCloning = true
compressed = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
heartbeatFrequency = 30
maxFailuresPerInterval = 2
secsInFailureInterval = 1
maxConnectionsPerIndexer = 2
connectionTimeout = 20
readTimeout = 300
writeTimeout = 300
tcpSendBufSz =

What happens in reality is that both 3rd_party_node_1 and 2 receive exactly the same data; it looks like data cloning instead of load balancing. Is there anything off in this config, or is load balancing not possible with 3rd-party receivers? Thanks
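A hedged observation: in outputs.conf, load balancing happens within a single tcpout group, while each additional group receives its own full copy of the stream, so duplicate delivery often means the two receivers ended up in (or are matched by) separate groups or stray per-server stanzas. A consolidated sketch (group names are placeholders, untested):

```ini
[tcpout]
defaultGroup = splunk-searchhead-group, third-party-raw

[tcpout:splunk-searchhead-group]
server = so1:9997

[tcpout:third-party-raw]
server = 3rd_party_node_1:3535, 3rd_party_node_2:3535
sendCookedData = false
autoLBFrequency = 5
```

Dropping the bare `[tcpout-server://...]` stanzas (unless they carry per-server overrides) removes one source of ambiguity. Note also that a raw (uncooked) stream can only switch receivers at safe stream boundaries, so short-lived tests may not show the expected alternation.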
Hi! I am looking to standardize my configuration across my Search Head Cluster. I have 15 search heads, and what I am looking to do is move my etc/system/local configs to a search head app (let's call it etc/apps/searchhead). Looking at my files, most of them should be fine, but I was wondering about the syntax for the distsearch.conf lookups. What I have now is like: lkp1 = apps/idm_search/lookups/lkpInterceptAttempt.csv. Would that same path find the file when the file is in /opt/splunk/etc/apps/searchhead/distsearch.conf? Or do I have to be more explicit about its location? Thanks! Stephen
Hello everyone! I have a clustered infrastructure (simplified): 2 SH (cluster) + 2 indexers (cluster) + a heavy forwarder (name: HF). On the HF I run a script which returns a JSON file, and I forward it from the HF to the indexers (HF -> IndexCluster). After that, I have to run some searches on the SH with that data. When I make a search request, the JSON is correctly parsed and looks perfect. BUT when I use `table` or just expand results, each JSON field is duplicated. I have a custom sourcetype defined on the heavy forwarder (although I tried some variations):

[just_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
NO_BINARY_CHECK = true
pulldown_type = true
category = Application

I assume that it multiplies by two because the JSON is parsed during indexing (or during sending from the heavy forwarder?) and then parsed again on the search head when the search runs. I have read some similar questions (not sure about the cluster case) but haven't succeeded; I still can't figure it out. Thanks in advance.
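This symptom is consistent with the fields being extracted twice: once at index time (`INDEXED_EXTRACTIONS = json` on the HF) and once at search time by the search head's automatic JSON extraction. A hedged sketch of the usual fix: the same props stanza must also exist on the search heads, because `KV_MODE` and `AUTO_KV_JSON` are search-time settings, and a stanza that only lives on the HF never reaches them:

```ini
# props.conf deployed to the search head cluster members
# (stanza name taken from the post above)
[just_json]
KV_MODE = none
AUTO_KV_JSON = false
```

After deploying and restarting, a quick check is whether `table` still shows multivalue duplicates for the same fields.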
Dear support in the form below, I have the following issues: 1. Empty pie-chart named Domains for field dest_nt_host 2. Empty RecordNumber and dest_nt_host at the (single) stats table in the end           <form> <label>Win Domain Logon Success</label> <search id="win_dm_logon_sc"> <query>index=os_windows EventCode=4776 Error_Code=0x0 | search user="$field_user$" Source_Workstation="$field_ws$"</query> <earliest>$field_time.earliest$</earliest> <latest>$field_time.latest$</latest> </search> <fieldset submitButton="false"> <input type="time" token="field_time"> <label>Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="text" token="field_user" searchWhenChanged="true"> <label>User</label> <default>*</default> </input> <input type="text" token="field_ws" searchWhenChanged="true"> <label>Workstation</label> <default>*</default> </input> </fieldset> <row> <panel> <title>Windows Domain Logons</title> <chart> <search base="win_dm_logon_sc"> <query>timechart count</query> </search> <option name="charting.chart">line</option> <option name="charting.drilldown">none</option> <option name="charting.legend.placement">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel> <title>Events</title> <single> <search base="win_dm_logon_sc"> <query>stats count</query> </search> <option name="drilldown">none</option> </single> </panel> <panel> <title>Users</title> <chart> <search base="win_dm_logon_sc"> <query>stats count by user | rename user as User</query> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> </chart> </panel> <panel> <title>Workstations</title> <chart> <search base="win_dm_logon_sc"> <query>stats count by Source_Workstation | rename Source_Workstation as Workstation</query> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> </chart> </panel> <panel> 
<title>Domains</title> <chart> <search base="win_dm_logon_sc"> <query>stats count by dest_nt_host | rename dest_nt_host as Dest_Domain</query> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> </chart> </panel> </row> <row> <panel> <table> <title>Windows Domain Successful Logons</title> <search base="win_dm_logon_sc"> <query>table _time RecordNumber user Source_Workstation dest_nt_host </query> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <option name="wrap">false</option> </table> </panel> </row> </form>       both fields do exist and do have data - 100%. I can verify this when I click on the magnifier search button and open them in a search. cannot find why. please advise best regards Altin  
Hi, what is the best way to specify the custom index into which I want to ingest data in Splunk?

1) Should I use Lambda to specify the custom index and return the result to Kinesis, and Kinesis will ingest the data?
2) Should I use Lambda directly to ingest the data?
3) Can Kinesis specify the custom index based on the CloudWatch log group, so that I can eliminate the use of Lambda?
In SmartStore, how long does data stay in local storage after being fetched from remote storage?
Hi, I am trying to build a result in tabular format. This is the table that I am currently getting:

timestamp          prcs_nm  outcome  date
normal time stamp  prcs_nm  Fail     2020-10-19
normal time stamp  prcs_nm  Fail     2020-10-19
normal time stamp  prcs_nm  Fail     2020-10-20
normal time stamp  prcs_nm  Fail     2020-10-21
normal time stamp  prcs_nm  Pass     2020-10-21

But I need the query to take the last value from the date field and return all the records that share that same date. The date field has custom input and always changes. The query should give the table this way instead of the way I am currently getting it:

timestamp          prcs_nm  outcome  date
normal time stamp  prcs_nm  Fail     2020-10-21
normal time stamp  prcs_nm  Pass     2020-10-21

I tried dedup, but I cannot give a sure count of how many records will be in the event log. The requirement is always to take the last updated date in the date column and find all records that share that same date, irrespective of the prcs_nm and outcome values. And I need the results in tabular format. Would really appreciate any help. Thanks a lot in advance.
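A hedged sketch of one approach: `eventstats` can attach the maximum date to every event without collapsing them, after which a `where` keeps only the rows sharing that latest date (this assumes the `date` field is in ISO `YYYY-MM-DD` form, which sorts correctly as a string):

```spl
... | eventstats max(date) as latest_date
| where date == latest_date
| table timestamp prcs_nm outcome date
```

Unlike `dedup`, this needs no assumption about how many records share the latest date.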
I want to know how I can extract the "show source" code from the event action type. I tried using _raw and the rex command. I even tried using sed and regex, but it didn't work.
Hi guys, I need to configure an alert for when people access a server as root, and for that I have two types of events. One logs when someone accesses as root:

Oct 16 15:52:55 *host* sshd[10873]: Accepted password for root from *IP* port 49745 ssh2

And another logs the person that was using that IP (used to log in as root) at the moment the connection was established:

Oct 16 17:09:11 *host* openvpn[20236]: *user*/:1194 MULTI_sva: pool returned IPv4=*IP*, IPv6=(Not enabled)

So I need to correlate these two types of events in order to know which persons were using the IP that logged in as root at the moment it happened. This is the search I've been using:

index=wineventlog eventtype=windows_logon_success | eval so="Windows" | eval user=if(isnull(Nombre_de_cuenta),user,mvindex(Nombre_de_cuenta, -1)) | append [search index=os source="/var/log/secure" user="root" eventtype=sshd_authentication | eval so="Linux"] | rex field=dest "(?<dest>.+?)\." | rex field=src "(?<src_hostname>.+?)\." | eval src=if(len(src_hostname)>2,src_hostname,src) | where src!=dest | search user=root | rex "m\s(?<IP>\S+)" | search IP=* | append [search index=gw_pfsense "openvpn" IPv4=*] | rex "for\s(?<root>\S+)" | convert ctime(_time) as time

What else could I try? Thanks in advance.
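A join-free alternative (a hedged sketch; the field extractions are illustrative and would need adjusting to the real event layout): search both event types in one pass, extract the IP from each event shape into a common field, and let `stats` group by IP:

```spl
(index=os source="/var/log/secure" "Accepted password for root")
OR (index=gw_pfsense "openvpn" "pool returned")
| rex "Accepted password for root from (?<ssh_ip>\d+\.\d+\.\d+\.\d+)"
| rex "IPv4=(?<vpn_ip>\d+\.\d+\.\d+\.\d+)"
| rex "openvpn\[\d+\]:\s(?<vpn_user>[^/\s]+)/"
| eval IP=coalesce(ssh_ip, vpn_ip)
| stats values(vpn_user) as vpn_user sum(eval(if(isnotnull(ssh_ip), 1, 0))) as root_logins by IP
| where root_logins > 0 AND isnotnull(vpn_user)
```

Adding a time condition (e.g. keeping only VPN assignments that precede the login, via `streamstats` or by comparing min/max times per IP) would tighten the correlation further.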
So, if I have an index=abc with fields a and b, and an index=xyz with fields b and c, I want to count the results where a="foo", c="bar", and b is common to both indexes. I want to do this without join because of the maxout limitation. A sample query with join is:

index="abc" a="foo" | join type=inner b [search(index="xyz" c="bar")] | timechart span="1h" count as foobar

Can someone help with a query giving the same result without join?
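A hedged sketch using the common `eventstats` pattern: search both indexes in one pass, count how many distinct indexes each value of `b` appears in, keep only the events whose `b` occurs in both, and then timechart the `abc` side (this assumes `b` holds comparable values in both indexes):

```spl
(index="abc" a="foo") OR (index="xyz" c="bar")
| eventstats dc(index) as idx_count by b
| where idx_count=2 AND index="abc"
| timechart span=1h count as foobar
```

Because `eventstats` runs over the combined result set, this avoids join's subsearch row limits, at the cost of pulling both event sets into one search.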