All Topics



Just wondering if there is any way I can monitor a Splunk Cloud licence externally. Does Splunk offer any API calls to the entitlement/support page? e.g. on-prem has a DMC alert weeks before the licence expires - I'm after something similar, but external, for Splunk Cloud.
I have data being pushed into Splunk in JSON format. What I am trying to do is combine events: for example, 2 events that have a common id should be merged into one. So I have the following data:

{ studentid: 1234, studentGrade: { Math: { grade: "A" } } }
{ studentid: 1234, studentGrade: { Physics: { grade: "C" } } }

As seen, I'd like to combine the 2 events into 1 based on the studentid, to end up with a result like the following:

Student Id   Math   Physics
1234         A      C

Thank you in advance; I'm very new to Splunk and I found it difficult to merge events based on a shared id.
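A minimal sketch of one way to do this, assuming the events are indexed with their JSON intact (the index and sourcetype names here are placeholders): spath extracts the nested grade fields, and stats collapses the two events onto one row per studentid.

```
index=students sourcetype=grades_json
| spath
| stats values(studentGrade.Math.grade) as Math values(studentGrade.Physics.grade) as Physics by studentid
| rename studentid as "Student Id"
```

Because stats does the merging, this avoids join and works regardless of which event arrives first.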
Hi, in Splunk clusters with SmartStore enabled on all indexes via remotePath in the [default] stanza, is there a way to disable an index or make an index read-only?

# splunk smart-store settings
[default]
remotePath = volume:remote_store/$_index_name

disabled = <boolean>
* Toggles your index entry off and on.
* Set to "true" to disable an index.
* CAUTION: Do not set this setting to "true" on remote storage enabled indexes.
* Default: false

isReadOnly = <boolean>
* Whether or not the index is read-only.
* If you set to "true", no new events can be added to the index, but the index is still searchable.
* You must restart splunkd after changing this setting. Reloading the index configuration does not suffice.
* Do not configure this setting on remote storage enabled indexes.
* If set to 'true', replication must be turned off (repFactor=0) for the index.
* Default: false
Hi, I have a dataset like below:

Date       Resource  Status
10:00:00   A         Success
10:00:00   B         Success
10:00:01   A         Failure
10:00:02   A         Failure
10:00:02   C         Failure
10:00:02   B         Failure
10:00:02   A         Success
10:00:03   B         Success
10:00:03   A         Failure
10:00:04   A         Failure
10:00:04   C         Failure
10:00:04   B         Failure

I am working on a metric whereby, if we have more than n consecutive errors within 30s, those need to be recorded, with output in a format like below. Let's say in the above example we need it for 2 or more consecutive errors; it should look something like this:

Min_Time   Max_Time   Resource   Status    Count
10:00:01   10:00:02   A          Failure   2
10:00:03   10:00:04   A          Failure   2

I am trying to use a combination of streamstats/eventstats but nothing seems to work; any help would be much appreciated. One of the examples I tried is below:

mysearch
| eval OccurenceDate=strftime(_time,"%Y-%m-%d %H:%M:%S")
| streamstats time_window=30s global=true min(OccurenceDate) as start max(OccurenceDate) as end count as numberofstatus BY status,resource_id reset_on_change=true
| table start,end,start,resource,numberofstatus
| streamstats first(start) as f_start last(end) as l_end max(numberofstatus) AS max_numberofstatus by code reset_on_change=true
| table f_start,l_end,max_numberofstatus,code,resource
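One possible sketch (not the asker's search, and the field names simply follow the sample data): mark each event where the status differs from the previous event for the same resource, number those runs, then aggregate each run and keep only failure runs of 2 or more.

```
mysearch
| sort 0 resource _time
| streamstats current=f window=1 last(status) as prev_status by resource
| eval run_start=if(isnull(prev_status) OR status!=prev_status, 1, 0)
| streamstats sum(run_start) as run_id by resource
| stats min(_time) as Min_Time max(_time) as Max_Time count by resource status run_id
| where status="Failure" AND count>=2
| fieldformat Min_Time=strftime(Min_Time, "%H:%M:%S")
| fieldformat Max_Time=strftime(Max_Time, "%H:%M:%S")
| table Min_Time Max_Time resource status count
```

The run-numbering trick sidesteps time_window entirely; if the 30s window is a hard requirement, an extra `where Max_Time-Min_Time<=30` after the stats would enforce it.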
Hi, I've got this webproxy ES base search where I'm trying to show a high number of destinations from a low number of sources. I would also like to throttle the search by going back 24hr, then only looking at the most recent time. How best to do that?

| tstats `summariesonly` count min(_time) as first_seen max(_time) as last_seen values(Web.action) as Web.action from datamodel=Web.Web WHERE index=webproxy Web.action=allowed by Web.src Web.dest
| `drop_dm_object_name("Web")`
| convert ctime(*_seen)
| lookup dnslookup clientip AS dest OUTPUT clienthost AS dest_host
| lookup dnslookup clientip AS src OUTPUT clienthost AS src_host
| search NOT src_host=drekar-rancher-ccena*
| sort - count
I'm trying to build a dashboard where the color scheme is green and black. The progress bar being blue is sort of ruining the design. I do believe we want the progress bars to stay, and I am aware of the ability to turn off the progress bar.
Hello, I need help extracting the number from this result:

Total number of files under /wmq/logs/AMXDEVRC120/active is: 184

I'm trying to get the total number of files in this directory and check whether it is over 500. Thank you.
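A minimal sketch, assuming the line above is the raw event (the base search here is a placeholder): rex captures the trailing number into a field, and where applies the threshold.

```
index=wmq "Total number of files under"
| rex "is:\s+(?<file_count>\d+)"
| where file_count > 500
```

If nothing survives the `where`, the count was at or under 500, which also makes this usable directly as an alert search.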
Hi, I am facing a weird situation where SEDCMD is working perfectly for all log sources except one: Splunk Stream data. This is what I have in props.conf on the HF. The data is collected using a UF and forwarded to the HF, which then sends it to the indexers.

[default]
SEDCMD-hidepasswords = y/<string1>/<string2>/

The above sed expression is applied to all other source types but does not work on any source type generated from Splunk Stream data. The expression itself is correct, as it works for other data in exactly the same form as the Splunk Stream data. Is there a specific reason behind this, such as Splunk Stream data not being in structured form like other text-based data sources?
Hi all, I'm wondering if anyone has had success updating notable events using the Splunk SDK for Python (splunklib). I've seen a few examples of how to get it done with the splunk python package (for example https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-edit-notable-events-in-es-programatically.html), but I'd prefer to leverage the Python SDK. I've formatted the POST request every way I can think of, but I can't get a proper request to the server. I always get this error: ``` splunklib.binding.HTTPError: HTTP 400 Bad Request -- b'"ValueError: One of comment, newOwner, status, urgency is required."' ``` I am passing a `comment` argument, but I must be passing it incorrectly.
My DNS is now only showing IP addresses in the logs. How do I get to see DNS names in the logs?
I have a scenario in which I have an indexer instance with 2TB in /opt, but it is 92% full. What is the most efficient and safe way to migrate the indexes to a new instance or a new partition? Thanks in advance.
I have a table built with a join, which means there are 2 sources: x and y. I receive the logs from x first, and I would like to load the information from source x into the table even while source y is still empty, then load the missing information from y once those logs arrive. Is it possible to do this?
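One common way to avoid join's all-or-nothing behaviour is to search both sources at once and merge rows with stats, so x's fields appear as soon as they exist and y's fields fill in later. A sketch, where `id`, `field_from_x`, and `field_from_y` are placeholder names for the shared key and the per-source fields:

```
(source=x OR source=y)
| stats values(field_from_x) as field_from_x values(field_from_y) as field_from_y by id
```

Alternatively, `join type=left` keeps every x row even when y has no match yet, but the stats form is generally cheaper and has no subsearch limits.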
Hi all consider this search: source=bandwidth | timechart sum(packets_in) by host which will produce rows indexed by a timestamp, and columns headed by hostnames. I'd like to scale values in each column via division by the average of that column. How should I go about it?  Many thanks.
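One way to sketch this (untested against the asker's data): untable the timechart so each host/value pair becomes a row, compute each host's column average with eventstats, divide, then pivot back with xyseries.

```
source=bandwidth
| timechart sum(packets_in) by host
| untable _time host packets
| eventstats avg(packets) as avg_packets by host
| eval scaled=packets/avg_packets
| xyseries _time host scaled
```

The untable/xyseries round trip avoids having to write a foreach over unknown host column names.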
I want to see Event Description together with File Create Time, but mine doesn't have it. Why, and how can I see it? This is mine: [screenshot] This is what I want to see: [screenshot]
I have a custom search command that extracts a domain name from a URL string field you specify into a new "domain" field. This works fine on a dev cluster we have set up (3 search heads, 2 indexers). For example, this returns expected results:

index=main | table _time url | mycustomcommand field_in=url

but adding a stats command at the end of the search causes the search to fail with the following error:

index=main | table _time url | mycustomcommmand field_in=url | stats count by domain

2 errors occurred while the search was executing. Therefore, search results might be incomplete. Hide errors.
[ip-{indexer_1_ip}] Streamed search execute failed because: Error in 'mycustomcommmand' command: External search command exited unexpectedly with non-zero error code 1..
[ip-{indexer_2_ip}] Streamed search execute failed because: Error in 'mycustomcommmand' command: External search command exited unexpectedly with non-zero error code 1..

Running the search directly on the indexer returns 0 results, because we don't have the url field extraction there, but there are no errors. My questions are:
1. Where can I find the reason for the failure? I can't seem to find what the actual error is anywhere in search.log.
2. Any ideas about what's going on here, or documentation that may help?
Hello all, I have a lookup table DIUSERS.csv and would like to build a query like the one below:

index=* |inputlookup DIUSERS.csv|stats count by src dest user name action index

But it's not working. Please let me know the correct query. Thanks.
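For reference, `inputlookup` is a generating command, so it can't follow an event search like that. One hedged sketch, assuming DIUSERS.csv contains a `user` column you want to filter the events by:

```
index=* [| inputlookup DIUSERS.csv | fields user ]
| stats count by src dest user name action index
```

The subsearch turns the lookup rows into `user=...` filters on the outer search; if the CSV's key column has a different name, a `rename` to `user` inside the subsearch would be needed.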
We are collecting perfmon information: "Free Megabytes" and "% Free Space". All is well in the collection of these items. We have an alert that fires when free space is less than 10 "Free Megabytes". Again, all is well.

I now need to modify the alert to report any hosts where "Free Megabytes" is less than 10 AND "% Free Space" is less than 20 (numbers are just an example). I am trying but haven't gotten it to work. Here is what I have in my testing:

sourcetype="Perfmon:Free Disk Space" instance!=_Total counter="% Free Space" Value<20 [ search host=* sourcetype="Perfmon:Free Disk Space" instance!=_Total counter="Free Megabytes" Value<10000 | return host ] | table host, instance, Value

Two concerns:
1. I need to look at all hosts and all drives, but not _Total (which combines them).
2. I need to alert only if "Free Megabytes" < 10 and "% Free Space" < 20.

Any help would be appreciated.
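One way to sketch this without a subsearch (so the pairing stays per host *and* per drive, which `return host` alone loses): pull both counters, pivot them onto one row per host/instance, then apply both thresholds. The threshold numbers here are the example's.

```
sourcetype="Perfmon:Free Disk Space" instance!=_Total (counter="% Free Space" OR counter="Free Megabytes")
| eval free_pct=if(counter=="% Free Space", Value, null())
| eval free_mb=if(counter=="Free Megabytes", Value, null())
| stats latest(free_pct) as free_pct latest(free_mb) as free_mb by host instance
| where free_mb<10000 AND free_pct<20
```

Because both conditions are evaluated on the same host/instance row, a drive only alerts when it fails both tests at once.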
Hi, I am having trouble understanding some portions of the following search. Can anyone help me understand it, please?

index=main
| where cidrmatch("192.168.10.1285", src_ip) AND dst_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", dst_ip) AND src_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", src_ip) AND cidrmatch("192.168.10.1285", dst_ip)
| bin _time span=1m
| eval H=len(_raw)
| stats count as W(H) mean(H) stdev(H) BY _time src_ip
| join src_ip
    [search index=main
    | where cidrmatch("192.168.10.1285", src_ip) AND dst_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", dst_ip) AND src_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", src_ip) AND cidrmatch("192.168.10.1285", dst_ip)
    | transaction src_ip dst_ip maxevents=2
    | bin _time span=1m
    | eval HH_jit=len(_raw)
    | stats count as W(HH_jit) mean(HH_jit) stdev(HH_jit) BY _time src_ip dst_ip]
| join src_ip
    [search index=main
    | where cidrmatch("192.168.10.1285", src_ip) AND dst_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", dst_ip) AND src_ip="192.168.10.61" OR cidrmatch("192.168.10.1285", src_ip) AND cidrmatch("192.168.10.1285", dst_ip)
    | bin _time span=1m
    | eval HpHp=len(_raw)
    | stats count as W(HpHp) mean(HpHp) stdev(HpHp) BY _time src_ip src_port dst_ip dst_port]
| table _time W(H) mean(H) stdev(H) W(HH_jit) mean(HH_jit) stdev(HH_jit) W(HpHp) mean(HpHp) stdev(HpHp) magnitude(HpHp) radius(HpHp) covariance(HpHp) correlation(HpHp)

It is used for the extraction of statistical features on the basis of a time frame like 35ms, 100ms, 1m. I do not understand what is actually meant by "time frame" here. What do "bin _time span", "eval H=len(_raw)", "transaction", and "maxevents=2" mean? What is count doing here? And for "covariance: an approximated covariance between two streams", what is meant by "between two streams" here?
Here is some information used for aggregating the features:

H = packet size transferred in one direction (host to all)
HH_jit = difference in time between transactions with the same IP values (host to host)
HpHp = packets transferred from host to host, taking ports into account (host:port to host:port)

I have read about these different terms on the Splunk search reference page but am not getting a clear picture for this particular case. I need urgent help and would appreciate a reply as soon as possible.
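For what it's worth, a stripped-down version of the first branch may make the individual pieces easier to read (this is illustrative, not the original search): `bin _time span=1m` buckets events into 1-minute windows, which is the "time frame"; `eval H=len(_raw)` uses the raw event's character length as a rough proxy for packet size; and `stats` then computes the event count, mean, and standard deviation of H per window per source IP.

```
index=main
| bin _time span=1m
| eval H=len(_raw)
| stats count mean(H) stdev(H) BY _time src_ip
```

In the second branch, `transaction src_ip dst_ip maxevents=2` merges at most 2 consecutive events per src/dst pair into one combined event before the same length measurement, which is how the jitter-like HH_jit feature is approximated. "Between two streams" in the covariance description refers to two of these per-pair value series being compared window by window.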
I have a field that sometimes contains only what appears to be whitespace. How would I replace the existing whitespace with a value of "none"?
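A minimal sketch, with `myfield` as a placeholder field name: trim strips the surrounding whitespace, and if nothing remains the value is replaced.

```
your_base_search
| eval myfield=if(isnull(myfield) OR trim(myfield)=="", "none", myfield)
```

The `isnull` check also covers events where the field is missing entirely, which usually matters for the same reports.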
When can we use SPL2 with the lovely comments options described at https://docs.splunk.com/Documentation/SCS/current/Search/Comments  ?