All Topics


I have an SPL search that returns a field with multiple values (names of lookups). I want to concatenate the lookup name and its origin app, found with the search below:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner search
| where match(search,"outputlookup\s+lookupname")
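Not part of the original post, just a minimal sketch of one way to build the name-plus-app string, assuming the lookups in question are file-based lookups visible through the data/lookup-table-files REST endpoint (that endpoint choice is my assumption):

| rest splunk_server=local /servicesNS/-/-/data/lookup-table-files
| eval lookup_and_app = title . " (" . 'eai:acl.app' . ")"
| table lookup_and_app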
Hi! I want to know whether it is possible to get duplicated ingestion of logs between Splunk Enterprise and Splunk Enterprise Security, and also whether logs ingested in Splunk Enterprise are available to searches made in Splunk Enterprise Security. In general, how does this work at the indexer level?
I am having trouble expressing multiple average windows in table form. My table shows the same values for myval, five_min_val, and fifteen_min_val for each host. I can get some of what I want from timechart and a trellis layout on each of the aggregations for a single host, but I would really like to look at the data across hundreds of hosts, where the value is above some threshold over 15 minutes. I also tried trendline with sma5 and sma15 to represent the 5-minute and 15-minute simple moving averages, with a similar result. Please enlighten me?

<base search>
| fields _time host myval
| bin span=1m _time
| streamstats window=5 avg(myval) as five_min_val by host
| streamstats window=15 avg(myval) as fifteen_min_val by host
| stats latest(myval) as myval latest(five_min_val) as five_min_val latest(fifteen_min_val) as fifteen_min_val by host
| table host myval five_min_val fifteen_min_val
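Not an answer from the thread, just a minimal sketch of a variant worth trying: streamstats is sensitive to event order, so forcing an ascending sort before the moving averages (and only then taking the latest value per host) may behave differently. The threshold of 80 below is an arbitrary placeholder, not from the post:

<base search>
| fields _time host myval
| bin span=1m _time
| sort 0 _time
| streamstats window=5 avg(myval) as five_min_val by host
| streamstats window=15 avg(myval) as fifteen_min_val by host
| stats latest(myval) as myval latest(five_min_val) as five_min_val latest(fifteen_min_val) as fifteen_min_val by host
| where fifteen_min_val > 80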
Hello Splunkers,

I am currently facing a problem and can't find any documentation. Let me explain: we are using the Splunk_TA_o365 mostly for sign-in logs. The issue is that none of these logs have the right timestamp. For these sign-in logs, the timestamp is stored in the "createdDateTime" field, and not in the "timestamp" field like other events. So I tried to "fix" it in local/props.conf with the stanza:

[o365:graph:api]
TIME_PREFIX = ("createdDateTime":\s*")|timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
KV_MODE = json
TZ = UTC

And it didn't work at all, but when I tried (and I know it is REALLY not recommended as a best practice) to write the same stanza in default/props.conf, it surprisingly worked. So I was wondering if this is normal behavior (which I'd find strange), or if there is another solution that would be more sustainable than modifying the default folder. Thanks in advance for your time, best regards and happy Splunking!
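Not part of the post, just a quick diagnostic sketch: btool shows which copy of props.conf actually wins for a given stanza, which can confirm whether the local file is being read at all (run it on the instance that parses the data, typically a heavy forwarder or indexer):

$SPLUNK_HOME/bin/splunk btool props list o365:graph:api --debug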
I would like to fit an ARIMA model to my data with a search something like this:

<base search> | timechart span=5m avg(value) as value by some_field

The problem is that the number of fields returned by this search is dynamic, so it can return 5 fields one day but 3 or 7 fields another day, for instance. I would like to fit an ARIMA model to every field that is returned by that search. What I found was the foreach command, where you iterate over fields:

| foreach * [eval '<<FIELD>>' = ... ]

However, when I try to use the fit command instead of eval, I get an error message saying:

Error in 'foreach' command: Search pipeline may not contain non-streaming commands

since foreach cannot contain non-streaming commands. Is there a way to work around this issue?
Hi,

I want to know how to differentiate between logs from a productive versus a non-productive license.

Thank you

Matilda
Hi all, Recently I have been working on getting a query that can help me identify the execution of malicious documents which make use of "T1036.002: Masquerading (Right-to-Left Override)".

"Adversaries may manipulate features of an artifact to mask its true intentions/make it seem legitimate. One technique that could be employed to achieve this is right-to-left character override (RTLO). RTLO is a non-printing Unicode character that causes the text that follows to be displayed in reverse. Detection of this technique involves monitoring filenames for commonly used RTLO character formats such as \u202E, [U+202E], and %E2%80%AE."

My current query does not work and simply shows all file names from the Image field:

index=*
| eval file_name=replace(Image,"(.*\\\)","")
| rex field=file_name "(?i)(?<hex_field>202e)"
| search NOT (hex_field="")
| dedup file_name
| table file_name, hex_field, Image

Image field: C:\Users\Administrator.BARTERTOWNGROUP\Desktop\‮cod.3aka3.scr
Note here that the rcs.3aka3.doc is RTL, not LTR. Does anyone have any idea how to achieve such filtering?
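Not from the thread, just a minimal sketch of an alternative, assuming Splunk's PCRE-based match() accepts the \x{202E} code-point escape (worth verifying on your version). The idea is to match the actual Unicode character rather than the literal text "202e":

index=*
| eval file_name=replace(Image, "(.*\\\)", "")
| where match(file_name, "\x{202E}")
| dedup file_name
| table file_name, Image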
Hello everyone, I am trying to configure the Splunk DB Connect app and am getting the following error in the logs:

2023-01-12T14:46:28+0300 [ERROR] [settings.py], line 89 : Throwing an exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 220, in validate_java_home
    raise Exception(reason)
Exception: PermissionError: [Errno 13] Permission denied: '/usr/java/jdk1.8.0_311-amd64/bin/java' validate java command: /usr/java/jdk1.8.0_311-amd64/bin/java

The folder /usr/java/jdk1.8.0_311-amd64 is owned by splunk:
drwxr-xr-x. 9 splunk splunk 107 Jan 12 12:54 jdk1.8.0_311-amd64

The folder /opt/splunk/etc/apps/splunk_app_db_connect shows the following ownership:
drwx------. 18 root root 4096 Jan 12 13:44 splunk_app_db_connect

My Splunk instance runs as the splunk user:
grep SPLUNK_OS_USER= /opt/splunkforwarder/etc/splunk-launch.conf
SPLUNK_OS_USER=splunk

What is the reason? Can anyone help?
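Not part of the original post, just a minimal diagnostic sketch: running the java binary as the splunk user from a shell can show whether the denial comes from the OS side (ownership, execute bits, SELinux) rather than from DB Connect itself:

sudo -u splunk /usr/java/jdk1.8.0_311-amd64/bin/java -version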
Hi all, I found an issue in the iplocation database on Splunk Cloud: if I use iplocation for several IP addresses (e.g. 147.161.244.186) I get Sao Paulo (Brazil), but using whois I find Amsterdam (NL), which is the correct answer. It could be a bug or an update problem; has anyone experienced this issue? And does anyone know how to update the iplocation database on Splunk Cloud? It doesn't seem to be possible. How can I intervene? Thank you for your support. Ciao. Giuseppe
The Upgrade Readiness App added to the Splunk Cloud Platform shows the following two errors.

1. Search peer SSL config check
2. MongoDB TLS and DNS validation check

The following knowledge article says that "server.conf" needs to be configured, but I think that is not possible with Splunk Cloud.
https://community.splunk.com/t5/Security/Search-peer-SSL-config-check-How-to-resolve-these-errors-that/m-p/618002#M16425

Can these errors be ignored? The Splunk Cloud Platform version is "Version:9.0.2209.1".
Regards,
Is there a possibility to define a standard set of health rules similar to the ones that are auto-created when you create a new application? Scenario: when a new application reports to AppDynamics, a standard set of health rules with custom names is applied to these applications. I understand we could create these using the API, but that would only happen after a new application has been identified, i.e. after the fact. The default rules seem to be picked up from somewhere in the default controller configuration.
Which is the best way to install Splunk in a Linux environment? Please share an easy-to-follow step-by-step guide or document. Thanks
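Not from the thread, just a minimal sketch of the common tarball install on Linux; the package file name, paths, and user below are assumptions to adjust for your environment:

# extract the downloaded tarball into /opt (run as root)
tar xvzf splunk-<version>-Linux-x86_64.tgz -C /opt
# create a dedicated user and hand the directory over to it
useradd -m splunk
chown -R splunk:splunk /opt/splunk
# start Splunk as that user and accept the license
sudo -u splunk /opt/splunk/bin/splunk start --accept-license
# optionally start Splunk at boot as the splunk user
/opt/splunk/bin/splunk enable boot-start -user splunk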
Hi experts,

I have a .CSV file whose timestamp is a simple incremental integer (1, 2, 3, ...). I want to know how to convert the time column (1, 2, 3, 4, ...) to a real time format that would begin from Jan 1st, 2023, for example. Does anyone have a great idea for doing this in props.conf?

Time  AAA   BBB         CCC         DDD
1     1073  29.9360008  121.446498  75
2     1074  29.9360008  121.600296  75
3     1078  29.9360008  122.417319  75
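Not from the thread: props.conf timestamp parsing can only read what is already in the event, so one alternative is to derive _time at search time instead. A minimal sketch, assuming the column is extracted as a field named Time and each increment should mean one minute after midnight on Jan 1st, 2023 (both are assumptions):

<your base search>
| eval _time = strptime("2023-01-01", "%Y-%m-%d") + (Time - 1) * 60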
Can anyone explain why the k1 eval token statement does not work, but k2 and k3, which do the same as k1 in two steps, do?

<eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))</eval>
<eval token="k2">mvfind($row.name$, $click.value2$)</eval>
<eval token="k3">mvindex($row.key$, $k2$)</eval>

Requirements are: two MV fields in a single row, with keys in one field and names in the other. Drilldown is cell, and click.value2 is the clicked name (the key column is hidden). I'm trying to grab the corresponding key for the clicked name. I finally got the k2/k3 combination working, but am puzzled why k1 does not work and don't know how to diagnose it. Here's an example dashboard.

<dashboard>
  <label>MV Click</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval name=split("ABCDEFGHIJKL", "") | eval key=lower(name) | table name key</query>
          <earliest>@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <fields>name</fields>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))</eval>
          <eval token="k2">mvfind($row.name$, $click.value2$)</eval>
          <eval token="k3">mvindex($row.key$, $k2$)</eval>
          <set token="name">$click.value2$</set>
          <set token="names">$row.name$</set>
          <set token="keys">$row.key$</set>
        </drilldown>
      </table>
    </panel>
    <panel>
      <html>
        <h2>Clicked name=$name$</h2><p/>
        <h2>Names=$names$</h2>
        <h2>Keys=$keys$</h2><p/>
        <h3>&lt;eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))&lt;/eval> = $k1$</h3>
        <h3>&lt;eval token="k2">mvfind($row.name$, $click.value2$)&lt;/eval> = $k2$</h3>
        <h3>&lt;eval token="k3">mvindex($row.key$, $$k2$$)&lt;/eval> = $k3$</h3>
      </html>
    </panel>
  </row>
</dashboard>
Hello Splunkers, I have a single-machine Splunk infrastructure. What stanzas do I need to provide in indexes.conf for an index so that data is retained in the following order?

Hot/Warm = 14 days
Cold = 10 months
Frozen = 1 month

I also have the following questions:
1. I see that hot and warm buckets are in the following location: $SPLUNK_HOME/var/lib/splunk/defaultdb/db/*. How would we know or differentiate between hot and warm buckets, or do they all look the same?
2. Also, once the warm bucket policy (size or time) is reached, will the cold location be created by itself, or should we create it manually ($SPLUNK_HOME/var/lib/splunk/defaultdb/colddb/*)? I am pretty new to Splunk, so can you please help with what the stanzas should be in order to achieve 14 days in hot/warm, 10 months in cold, and 1 month in frozen?
3. What happens if we have a year's worth of data in hot/warm?
4. How do we back up data every day? Should we copy the buckets every day and store them in separate storage, and if a disaster occurs and we place the buckets from storage back into warm and cold, will we see the data as before?

Thanks,
mz9j
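Not an answer from the thread, just a minimal sketch of the kind of indexes.conf stanza involved, with several assumptions: warm-to-cold rolling is driven by bucket count/size rather than age, so "14 days hot/warm" is only approximate; frozenTimePeriodInSecs covers hot, warm, and cold combined (here roughly 11 months); and the extra "1 month frozen" would have to be enforced by whatever cleans up the coldToFrozenDir archive, which Splunk does not manage. All names, paths, and numbers below are placeholders:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# total retention in hot+warm+cold before buckets freeze (~11 months)
frozenTimePeriodInSecs = 28512000
# roughly bound how long data stays in hot/warm by limiting the warm bucket count
maxWarmDBCount = 300
# archive frozen buckets instead of deleting them
coldToFrozenDir = /backup/splunk_frozen/my_index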
We recently updated our AWS Add-on to version 6.3, after which all Generic S3 inputs stopped ingesting. We noticed the following error being repeated during every S3 API call:

"parse_csv_with_delimiter'"

The data within our S3 bucket was a tar or .gz archive in either JSON or XML format; after the upgrade, our previous AWS S3 inputs seemed to have defaulted to CSV format. We ended up recreating several AWS Generic S3 inputs using a start date from when the Add-on was updated, which allowed the previously missed logs to ingest again. You can run this search to determine if your system is having a similar issue:

index=_internal level=ERROR ErrorDetail="'parse_csv_with_delimiter'"
Hello, I have a Splunk query that looks like the following:

index=something "*abc*" OR "*def*" OR "*hig*"

These substrings do not belong to particular fields. Is there a way to put them in a lookup table? If they were field values, I would've done something like:

index=something [| inputlookup My.csv | fields FieldName | format]
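Not from the thread, just a minimal sketch of one commonly used pattern, assuming the goal is raw-text substring matching: a subsearch that returns a field literally named search has its values inserted as literal search terms (ORed together), so the wildcards can be built inside the subsearch. My.csv and FieldName are taken from the post; it's worth verifying the actual expansion with the Job Inspector:

index=something
    [| inputlookup My.csv
     | eval search="*" . FieldName . "*"
     | fields search]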
Hi, greetings. I'm trying to add search heads to an existing cluster by updating the server.conf file. To be more specific, I'm adding three search heads. One search head was added successfully, but when I repeat the same steps on the other two search heads, they don't join the cluster. Below is the output when Splunk is restarted.

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _telemetry _thefishbucket history main summary
Done
Bypassing local license checks since this instance is configured with a remote license master.
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-7.1.1-8f0ead9ec3db-linux-2.6-x86_64-manifest'
All installed files intact.
Done
Checking replication_port port [8090]: open
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done [ OK ]
Waiting for web server at http://127.0.0.1:8000 to be available...........
WARNING: web interface does not seem to be available!

Please advise.
Thanks,
CG
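Not from the thread, just a minimal sketch of the CLI path for comparison, assuming this is a search head cluster rather than hand-edited server.conf; all URIs and the secret below are placeholders:

# on the new member, point it at the cluster
splunk init shcluster-config -mgmt_uri https://<new_member>:8089 -replication_port 8090 -secret <shcluster_secret> -conf_deploy_fetch_url https://<deployer>:8089
# on an existing member, admit the new one
splunk add shcluster-member -new_member_uri https://<new_member>:8089
# then check membership
splunk show shcluster-status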
Hey all, I'm attempting to compare a variable (we'll call it cDOW), which is set to strftime(now(), "%A"), to a DOW field in a lookup file which contains 1 or more days of the week. Here is what I am using currently to include entries in the results which have a DOM or DOW field, or which have them filled with NA:

| eval cDOM=strftime(now(), "%d")
| eval cDOW=strftime(now(), "%A")
| where (DOM like cDOM OR DOM="NA") AND (DOW like cDOW OR DOW="NA")

This works fine for fields which match exactly (e.g. DOW=Wednesday, cDOW=Wednesday), but does not work if the DOW field contains multiple days of the week (as many will, since this lookup file is a schedule of jobs). The DOM field will only ever have the exact number day of the month, but the DOW field will often contain 1-5 days, and I'd like to have this where statement return entries which contain the current day of week regardless of how many days are listed. I've tried using wildcards, but can't syntactically figure this out since it's comparing an eval variable to a lookup field and there are no static values. Trying to append wildcards to a relative time in the where statement itself also does not work syntactically.

Any ideas on how to easily accomplish this?
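Not from the thread, just a minimal sketch of one option, assuming DOW holds plain-text day names (e.g. "Monday Wednesday Friday"): the eval function like() accepts a dynamically built pattern, so the wildcard can be concatenated around the computed day name:

| eval cDOM=strftime(now(), "%d")
| eval cDOW=strftime(now(), "%A")
| where (DOM=cDOM OR DOM="NA") AND (like(DOW, "%" . cDOW . "%") OR DOW="NA")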
I have two different sources with different fields. Let's call them sourcetypeA and sourcetypeB. Some fields that I want to dedup do not overlap: let's say sfieldA only exists in sourcetypeA and sfieldB only exists in sourcetypeB. My intention is to have a single search (without append) that returns events from both sources containing unique sfieldA values in sourcetypeA and unique sfieldB values in sourcetypeB.

I was initially surprised that the following returned no events:

sourcetype=sourcetypeA OR sourcetype=sourcetypeB | dedup sfieldA sfieldB

Then I realized that this asks for dedup on nonexistent keys. My question is, then: is there a syntax to express my intent?
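Not from the thread, just a minimal sketch of one way to express this, assuming each event carries exactly one of the two fields: coalesce() builds a single dedup key from whichever field exists, so dedup never sees a missing key:

sourcetype=sourcetypeA OR sourcetype=sourcetypeB
| eval dedup_key=coalesce(sfieldA, sfieldB)
| dedup dedup_key
| fields - dedup_key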