All Topics


What is the best way to install Splunk in a Linux environment? Please share any easy-to-follow step-by-step guide or document. Thanks
Hi experts, I have a .CSV file whose timestamp is a simple incremental integer (1, 2, 3, ...). I want to convert the time column (1, 2, 3, 4, ...) to a time format that would begin from, for example, Jan 1st, 2023. Does anyone have a good way to do this in props.conf?

Time  AAA   BBB         CCC         DDD
1     1073  29.9360008  121.446498  75
2     1074  29.9360008  121.600296  75
3     1078  29.9360008  122.417319  75
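One search-time approach, as a minimal untested sketch: props.conf on its own cannot offset an arbitrary starting epoch, so the math is easier in SPL. This assumes the integer column is extracted as a field named Time and that each increment represents one second:

    | eval _time = strptime("2023-01-01", "%Y-%m-%d") + (Time - 1)
    | table _time AAA BBB CCC DDD

If each increment represents a minute or some other interval, multiply (Time - 1) by that interval in seconds.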
Can anyone explain why the k1 eval token statement does not work, while k2 and k3, which do the same as k1 but in two steps, do?

<eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))</eval>
<eval token="k2">mvfind($row.name$, $click.value2$)</eval>
<eval token="k3">mvindex($row.key$, $k2$)</eval>

Requirements: two multivalue fields in a single row, with keys in one field and names in the other. Drilldown is cell, and click.value2 is the clicked name (the key column is hidden). I'm trying to grab the corresponding key for the clicked name. I finally got the k2/k3 combination working, but I am puzzled why k1 does not work and don't know how to diagnose it. Here's an example dashboard.

<dashboard>
  <label>MV Click</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval name=split("ABCDEFGHIJKL", "") | eval key=lower(name) | table name key</query>
          <earliest>@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <fields>name</fields>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))</eval>
          <eval token="k2">mvfind($row.name$, $click.value2$)</eval>
          <eval token="k3">mvindex($row.key$, $k2$)</eval>
          <set token="name">$click.value2$</set>
          <set token="names">$row.name$</set>
          <set token="keys">$row.key$</set>
        </drilldown>
      </table>
    </panel>
    <panel>
      <html>
        <h2>Clicked name=$name$</h2><p/>
        <h2>Names=$names$</h2>
        <h2>Keys=$keys$</h2><p/>
        <h3>&lt;eval token="k1">mvindex($row.key$, mvfind($row.name$, $click.value2$))&lt;/eval> = $k1$</h3>
        <h3>&lt;eval token="k2">mvfind($row.name$, $click.value2$)&lt;/eval> = $k2$</h3>
        <h3>&lt;eval token="k3">mvindex($row.key$, $$k2$$)&lt;/eval> = $k3$</h3>
      </html>
    </panel>
  </row>
</dashboard>
Hello Splunkers, I have a single-machine Splunk infrastructure. What stanzas do I need to provide in indexes.conf for an index so that data is retained in the following order?

Hot/Warm = 14 days
Cold = 10 months
Frozen = 1 month

I also have the following questions:
1. I see that hot and warm buckets are in $SPLUNK_HOME/var/lib/splunk/defaultdb/db/*. How would we know or differentiate between hot and warm buckets, or do they all look the same?
2. Once the warm bucket policy (size or time) is reached, will the cold location ($SPLUNK_HOME/var/lib/splunk/defaultdb/colddb/*) be created by itself, or should we create it manually?
3. What happens if we have a year's worth of data in hot/warm?
4. How do we back up data every day? Should we copy the buckets every day and store them in separate storage, and if a disaster occurs and we place the buckets back from storage into warm and cold, will we see the data as before?

I am pretty new to Splunk, so can you please help with the stanzas needed to achieve 14 days in hot/warm, 10 months in cold, and 1 month in frozen?
Thanks,
mz9j
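A rough indexes.conf sketch for that retention (assumptions: a hypothetical index name my_index and archive path; note there is no direct "warm for N days" setting, since warm-to-cold rolling is driven by bucket count/size, and Splunk does not expire the frozen archive itself, so the 1-month frozen retention needs an external cleanup job):

    [my_index]
    homePath   = $SPLUNK_DB/my_index/db
    coldPath   = $SPLUNK_DB/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb
    # roll hot buckets at least daily so age-based retention is predictable
    maxHotSpanSecs = 86400
    # total hot+warm+cold lifetime before freezing: ~14 days + ~10 months = ~318 days
    frozenTimePeriodInSecs = 27475200
    # archive frozen buckets here instead of deleting them (cleanup is up to you)
    coldToFrozenDir = /opt/splunk_frozen/my_index

As for telling buckets apart: hot buckets live in directories named hot_v1_<id>, while warm buckets are renamed to db_<newesttime>_<oldesttime>_<id>; the colddb directory is created by Splunk automatically when buckets roll to cold.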
We recently updated our AWS Add-on to version 6.3, after which all Generic S3 inputs stopped ingesting. We noticed the following error being repeated during every S3 API call:

"parse_csv_with_delimiter'"

The data in our S3 bucket was in .tar or .gz archives containing either JSON or XML; after the upgrade, our previous AWS S3 inputs seemed to have defaulted to CSV format. We ended up recreating several AWS Generic S3 inputs using a start date from when the add-on was updated, which allowed the previously missed logs to ingest again.

You can run this search to determine if your system is having a similar issue:

index=_internal level=ERROR ErrorDetail="'parse_csv_with_delimiter'"
Hello, I have a Splunk query that looks like the following:

index=something "*abc*" OR "*def*" OR "*hig*"

These substrings do not belong to particular fields. Is there a way to put them in a lookup table? If they were field values, I would have done something like:

index=something [| inputlookup My.csv | fields FieldName | format]
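A possible (untested) sketch, assuming the lookup column is named FieldName: build a field called search inside the subsearch and wrap each value in wildcards, so the subsearch returns raw wildcard terms rather than field=value pairs:

    index=something
        [| inputlookup My.csv
         | eval search="*" . FieldName . "*"
         | fields search]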
Hi, greetings. I'm trying to add search heads to an existing cluster by updating the server.conf file. To be more specific, I'm adding three search heads. One search head was added successfully, but when I repeat the same steps on the other two search heads, they don't join the cluster. Below is the output when Splunk is restarted:

Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories... Done
        Checking indexes...
                Validated: _audit _internal _introspection _telemetry _thefishbucket history main summary
        Done
        Bypassing local license checks since this instance is configured with a remote license master.
        Checking filesystem compatibility... Done
        Checking conf files for problems... Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunk/splunk-7.1.1-8f0ead9ec3db-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
        Checking replication_port port [8090]: open
All preliminary checks passed.
Starting splunk server daemon (splunkd)... Done
                                           [ OK ]
Waiting for web server at http://127.0.0.1:8000 to be available...........
WARNING: web interface does not seem to be available!

Please advise.
Thanks,
CG
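If hand-editing server.conf keeps failing on the remaining members, a hedged alternative (placeholder URIs, and assuming the deployer and an existing member are reachable) is the documented CLI flow, run on each new search head:

    $SPLUNK_HOME/bin/splunk init shcluster-config -auth admin:<password> \
        -mgmt_uri https://<new-search-head>:8089 \
        -replication_port 8090 \
        -conf_deploy_fetch_url https://<deployer>:8089 \
        -secret <pass4SymmKey> \
        -shcluster_label <label>
    $SPLUNK_HOME/bin/splunk restart
    $SPLUNK_HOME/bin/splunk add shcluster-member -current_member_uri https://<existing-member>:8089

Afterwards, splunk show shcluster-status run against any member should list the new search heads.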
Hey all, I'm attempting to compare a variable (we'll call it cDOW), which is set to strftime(now(), "%A"), to a DOW field in a lookup file which contains one or more days of the week. Here is what I am currently using to include rows in the results whose DOM or DOW field matches today, or which have them filled with NA:

| eval cDOM=strftime(now(), "%d")
| eval cDOW=strftime(now(), "%A")
| where (DOM like cDOM OR DOM="NA") AND (DOW like cDOW OR DOW="NA")

This works fine for fields which match exactly (e.g. DOW=Wednesday, cDOW=Wednesday), but does not work if the DOW field contains multiple days of the week (as many will, since this lookup file is a schedule of jobs). The DOM field will only ever have the exact numeric day of the month, but the DOW field will often contain one to five days, and I'd like this where statement to return rows which contain the current day of the week regardless of how many days are listed. I've tried using wildcards, but can't figure out the syntax since it's comparing an eval variable to a lookup field and there are no static values. Appending wildcards to a relative time in the where statement itself also does not work syntactically. Any ideas on how to accomplish this easily?
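One possible fix, as an untested sketch (assuming DOW holds the day names as plain text, e.g. "Monday Wednesday Friday"): build the wildcard pattern inside like() with string concatenation, so the match works no matter how many days are listed:

    | eval cDOM=strftime(now(), "%d")
    | eval cDOW=strftime(now(), "%A")
    | where (DOM=cDOM OR DOM="NA")
        AND (DOW="NA" OR like(DOW, "%" . cDOW . "%"))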
I have two different sources with different fields. Let's call them sourcetypeA and sourcetypeB. Some fields that I want to dedup do not overlap: say sfieldA only exists in sourcetypeA and sfieldB only exists in sourcetypeB. My intention is to have a single search (without append) return events from both sources that contain unique sfieldA values in sourcetypeA and unique sfieldB values in sourcetypeB.

I was initially surprised that the following returned no events:

sourcetype=sourcetypeA OR sourcetype=sourcetypeB | dedup sfieldA sfieldB

Then I realized that this asks for dedup on nonexistent keys. My question is, then: is there a syntax to express my intent?
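One sketch that may express it (assuming neither field ever appears in the other sourcetype, so the values never collide): coalesce the two fields into a single key and dedup on that:

    sourcetype=sourcetypeA OR sourcetype=sourcetypeB
    | eval dedupKey=coalesce(sfieldA, sfieldB)
    | dedup dedupKey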
If space is not really an issue, are there any other reasons to have the search factor lower than the replication factor? Thanks.
Hi all, how can I know whether props have been defined for a particular sourcetype? How can I check it?
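One quick way to check from the CLI (my_sourcetype is a hypothetical stanza name; btool prints the merged configuration and, with --debug, the file each setting comes from):

    $SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug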
I need to create a pie chart from two different searches/indexes. I have two separate queries that show the total count from my search results.

Query 1:
index="first_index"
| stats count by auth.metadata.role_name
| rex field=auth.metadata.role_name
| dedup auth.metadata.role_name
| stats count

Query 2:
index="second_index" sourcetype="mysource" (request.path="my/path/*" OR request.path="my/path/sign/*") NOT (request.path="not/my/path" OR request.path="also/not/my/path") response
| eval expired=if((now() > 'response.data.expiration'),1,0)
| table _time, request.data.common_name, expired, auth.metadata.role_name
| rename request.data.common_name as cn
| search "auth.metadata.role_name"="my_role_name"
| table cn
| dedup cn
| stats count

Query 1 is the 100% baseline. How can I make query 2 show a percentage using query 1 as the 100%? I.e., if query 1's stats count = 150 and query 2's stats count = 75, then query 2 should show 50%.

Thanks
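One hedged way to combine them in a single search (a sketch that keeps both queries essentially as written; appendcols glues the two single-row counts side by side so eval can compute the ratio):

    index="first_index"
    | stats count by auth.metadata.role_name
    | dedup auth.metadata.role_name
    | stats count as total
    | appendcols
        [ search index="second_index" sourcetype="mysource" (request.path="my/path/*" OR request.path="my/path/sign/*") NOT (request.path="not/my/path" OR request.path="also/not/my/path") response
          | rename request.data.common_name as cn
          | search "auth.metadata.role_name"="my_role_name"
          | dedup cn
          | stats count as subset ]
    | eval percent=round(subset / total * 100, 2)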
When I generate notables "for each result", the maximum number of notables is 250 or 500. I want all results to produce a notable.
This is my sample event:

onlinequoteinguser 2023-01-11T10:27:13,843 INFO DigitalPortal.xxxeSubmissionUtil {"hostName": "xxx80hlxxda044", "SourceSystem": "null", "level": "INFO", "message": "Start | newSubmission", "serverId": "prod-xxx_xx78", "userId": "onlinequoteinguser", "contextMap": [ {"JsonRpcId":"b55296cf-81e1-4xxx-8064-052dxx416725_5"}, {"methodName":"createOrUpdateDraftSubmission"}, {"traceabilityID":"7cxxx367-09aa-4367-87d4-b120526xxxcb"}, {"requestPath":"\/edge\/xxxquoteflow\/letsgetstarted"}], "applicationName": "xx", "timestamp": "20230111T102713.841-0500"}

Here is my query to retrieve specific events based on my JSON field:

index=app_xx Appid="APP-xxxx" Environment=PROD "contextMap{}.methodName"="createOrUpdateDraftSubmission"

How do I make the appropriate search?
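If contextMap{}.methodName isn't extracted automatically (the event has a text prefix before the JSON object, which can prevent automatic KV extraction), one untested sketch is to pull out the JSON portion and run spath on it:

    index=app_xx Appid="APP-xxxx" Environment=PROD "createOrUpdateDraftSubmission"
    | rex field=_raw "(?<json_payload>\{.*\})"
    | spath input=json_payload path=contextMap{}.methodName output=methodName
    | search methodName="createOrUpdateDraftSubmission"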
I've been trying to find an answer to this, and it seems like it's supposed to work, so I'm not sure if I have a misconfiguration or if I'm doing something wrong. I have a deployment server set up to deploy one or two apps. If I go to a machine that has the deployment server configured and has had the configured app installed (through the deployment server), and I delete configuration files, etc., the files never get replaced by the deployment server. It will replace files if I delete the entire app folder, but not individual files.

Would anyone have any clues on why this is happening? I have tried the "reload deploy-server" command, but it didn't seem to do anything. Am I being unrealistic to assume individual files would also be checked against the deployment server app? I want to ensure that inputs, outputs, etc. are uniform, and if some were to get deleted or changed, I would need the change to get pushed; for example, if someone went in and deleted conf files intentionally or accidentally.
How do I find out how many heavy forwarder licenses/instances I can install?
I have written an add-on that gets data from an API and yields rows from it one by one in a loop. I use the GeneratingCommand class from splunklib. When I run it on the search head, it runs for a time and then returns all the rows at once. So it seems that Splunk buffers the results until the whole process is complete, even though the add-on code itself has no such buffering. Is it possible to have it show the rows on the search head as they are yielded, similar to how a normal Splunk search does?
We are currently experiencing an issue in our 9.0.2 environment where our syslog UFs are unable to connect to our indexers. When we take a look at splunkd.log on our syslog servers we see:

WARN AutoLoadBalancedConnectionStrategy [3438113 TcpOutEloop] - Cooked connection to ip=xxx.xxx.xxx.xxx:9997 timed out

These servers are in the same VRF, so there is no firewall in between; we have useACK and autoBatch set to false for the 9.x workaround, and the indexers are receiving all data from our non-syslog UFs. These syslog servers had been working just fine up until a day or two ago. Any additional troubleshooting suggestions would be much appreciated.
Hello everyone, we have a query regarding cold storage and archiving. Is it possible to archive Splunk logs on a single site in a distributed multisite environment? If yes, can you share any documents on configuring this?

Also, one of our customers does not want cold storage at DR. Is it possible to configure cold storage only at the DC, skip it at DR, and just configure the hot storage path in indexes.conf there?

Let us know if it's possible to configure the above-mentioned scenarios.
Hi, suppose I have a multivalue field which represents names and can have different values in each event. For example:

names (ordered by time desc):
event 1: Emma, Dan, Mike
event 2: Dan, Patrick
event 3: Mike, Olivia

In addition, I have another multivalue field which represents the corresponding people's grades (matched by position):

grades (ordered by time desc):
event 1: 80, 70, 100
event 2: 90, 75
event 3: 88, 95

I would like to take each person's latest grade (i.e. take all people ever seen, without duplicates). My result should look like:

Emma 80
Dan 70
Mike 100
Patrick 75
Olivia 95
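One untested sketch (assuming names and grades always have the same number of values per event): zip the two multivalue fields together, expand to one row per person, then keep only the most recent row for each name:

    | eval pair=mvzip(names, grades, "|")
    | mvexpand pair
    | eval name=mvindex(split(pair, "|"), 0), grade=mvindex(split(pair, "|"), 1)
    | sort - _time
    | dedup name
    | table name grade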