All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi there! I need a query that will show me Top Sourcetype Sizes by Day, where sourcetype=kubernetes_logs, with kubernetes_logs itself divided by service names (or namespace names). Right now, I'm using this query:

index=_internal source=*license_usage.log type="Usage"
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, pool, indexname, sourcetypename
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, sourcetypename, GB
| sort by GB
| reverse

But how do I single out kubernetes_logs here, and divide it by service names? Thanks!
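One possible approach (untested sketch): license_usage.log has no visibility into Kubernetes namespaces, so the per-service split has to come from the events themselves, for example by approximating volume with len(_raw). The field name `namespace` is an assumption about how your kubernetes_logs events are structured:

```
index=* sourcetype=kubernetes_logs
| eval bytes=len(_raw)
| bin _time span=1d
| stats sum(bytes) as b by _time, namespace
| eval GB=round(b/1024/1024/1024, 3)
| sort - GB
```

len(_raw) undercounts slightly versus licensed bytes, but it is usually close enough to rank namespaces against each other.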
Hi, I use the search below to display a timechart which counts the number of hosts that fall into a CPU consumption range (0-20, 20-40, 40-60):

`CPU` earliest=-30d latest=now
| fields process_cpu_used_percent host
| eval cpu_range=case(process_cpu_used_percent>0 AND process_cpu_used_percent<=20,"0-20", process_cpu_used_percent>20 AND process_cpu_used_percent<=40,"20-40", process_cpu_used_percent>40 AND process_cpu_used_percent<=60,"40-60")
| timechart span=1d dc(host) as host by cpu_range

I need two changes:
1) Instead of counting each process_cpu_used_percent value per host in a CPU range, I need to bucket each host by its average process_cpu_used_percent.
2) Is it possible to take only the events which are in a specific time slot (between 8h and 17h)?
Thanks a lot for your help.
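A sketch of both changes (untested; assumes the default date_hour field reflects the event's local time, and that the average should be computed per host per day before bucketing):

```
`CPU` earliest=-30d latest=now date_hour>=8 date_hour<17
| bin _time span=1d
| stats avg(process_cpu_used_percent) as avg_cpu by _time, host
| eval cpu_range=case(avg_cpu<=20,"0-20", avg_cpu<=40,"20-40", avg_cpu<=60,"40-60")
| timechart span=1d dc(host) as host by cpu_range
```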
Hello, I'm new to Splunk so sorry if this seems like a basic question. Previously, in my search I was listing various sources in the query itself:

index=my_index host=my_host (source="comp_1.log" OR source="comp_2.log" OR ...) "keyword I'm looking for in event"

However, that was getting difficult to maintain and doesn't really fit my requirements, so I have now moved my sources to a lookup file with a structure like this:

sources.csv
source
"comp_1.log"
"comp_2.log"
...
"comp_n.log"

My question is: can I use these values in a search in a similar way to how I would use tokens? I tried something like this but am not getting any results:

|inputlookup sources.csv | search index=my_index host=my_host source=source "keyword I'm looking for in event"

I'm sure this is something that can be done and that I'm just making a mistake somewhere.
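One common pattern (sketch, untested): run inputlookup as a subsearch, so its rows are expanded into an OR of source= terms before the main search executes:

```
index=my_index host=my_host "keyword I'm looking for in event"
    [| inputlookup sources.csv | fields source ]
```

The subsearch effectively becomes ( source="comp_1.log" OR source="comp_2.log" OR ... ). The original attempt likely failed because |inputlookup as the first command makes the lookup rows themselves the results, and the following | search then filters those rows rather than querying the index.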
My search is running slow. I have a live dashboard and it is populated by a query in my search. I am new to Splunk but I managed to develop a dashboard project. I'm working on macros and I was wondering how I should develop a macro search to optimize my search, if that is even possible.

index="TEM_dashboard_main"
| eval displayValue=case(TestResult_Value == "PASSED", "low", TestResult_Value == "FAILED", "severe")
| dedup Application_Name, TestCase_Value, SwimLane_Value, TestResult_Value
| sort Application_Name, TestCase_Value
| append [|dbxquery query="select distinct a.Application_Name, t.TestCase_Value, 'QA1','QA2','QA3','QA4','QA5','QA6','QA7','STG','STG2','PVE' from TEM_Application a left Outer Join Dashboard_TEM_Application d ON a.Application_Id = d.Application_Id left Outer Join TEM_TestCase t ON d.TestCase_Id = t.TestCase_Id left join [AZLIFEMazl6n2j].[Splunk] V on t.TestCase_Value = V.TestCase_Value and V.TestCase_Value is null where a.Application_Name is not NULL AND d.Active = 1" connection="TEM_Database" timeout=600]
| eval QA1 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA1","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA1","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA1","NA")
| eval QA2 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA2","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA2","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA2","NA")
| eval QA3 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA3","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA3","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA3","NA")
| eval QA4 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA4","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA4","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA4","NA")
| eval QA5 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA5","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA5","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA5","NA")
| eval QA6 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA6","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA6","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA6","NA")
| eval QA7 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="QA7","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="QA7","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="QA7","NA")
| eval STG = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="STG","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="STG","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="STG","NA")
| eval STG2 = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="STG2","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="STG2","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="STG2","NA")
| eval PVE = case(like(TestResult_Value,"PASSED") AND SwimLane_Value=="PVE","low", like(TestResult_Value,"FAILED") AND SwimLane_Value=="PVE","severe", like(TestResult_Value,"NA") AND SwimLane_Value=="PVE","NA")
| table Application_Name, TestCase_Value, QA1, QA2, QA3, QA4, QA5, QA6, QA7, STG, STG2, PVE
| rename TestCase_Value AS "Test Case"
| rename Application_Name AS "Application Name"
| stats values(QA1) as QA1, values(QA2) as QA2, values(QA3) as QA3, values(QA4) as QA4, values(QA5) as QA5, values(QA6) as QA6, values(QA7) as QA7, values(STG) as STG, values(STG2) as STG2, values(PVE) as PVE by "Application Name", "Test Case"
| eval QA1 = if((mvjoin(QA1, ",") == "low,severe" OR mvjoin(QA1, ",") == "severe,low"), "elevated", QA1)
| eval QA2 = if((mvjoin(QA2, ",") == "low,severe" OR mvjoin(QA2, ",") == "severe,low"), "elevated", QA2)
| eval QA4 = if((mvjoin(QA4, ",") == "low,severe" OR mvjoin(QA4, ",") == "severe,low"), "elevated", QA4)
| eval QA5 = if((mvjoin(QA5, ",") == "low,severe" OR mvjoin(QA5, ",") == "severe,low"), "elevated", QA5)
| eval QA6 = if((mvjoin(QA6, ",") == "low,severe" OR mvjoin(QA6, ",") == "severe,low"), "elevated", QA6)
| eval QA7 = if((mvjoin(QA7, ",") == "low,severe" OR mvjoin(QA7, ",") == "severe,low"), "elevated", QA7)
| eval STG = if((mvjoin(STG, ",") == "low,severe" OR mvjoin(STG, ",") == "severe,low"), "elevated", STG)
| eval STG2 = if((mvjoin(STG2, ",") == "low,severe" OR mvjoin(STG2, ",") == "severe,low"), "elevated", STG2)
| eval PVE = if((mvjoin(PVE, ",") == "low,severe" OR mvjoin(PVE, ",") == "severe, low"), "elevated", PVE)
| eval QA3 = if((mvjoin(QA3, ",") == "low,severe" OR mvjoin(QA3, ",") == "severe,low"), "elevated", QA3)
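The ten near-identical eval/case blocks are a good candidate for simplification before reaching for macros: eval can write to a dynamically named field, so one severity calculation can fan out into the per-swim-lane columns. A sketch (untested; assumes SwimLane_Value always holds one of the column names):

```
index="TEM_dashboard_main"
| eval severity=case(TestResult_Value=="PASSED","low", TestResult_Value=="FAILED","severe", TestResult_Value=="NA","NA")
| eval {SwimLane_Value}=severity
| stats values(QA1) as QA1 values(QA2) as QA2 values(QA3) as QA3 values(QA4) as QA4
        values(QA5) as QA5 values(QA6) as QA6 values(QA7) as QA7
        values(STG) as STG values(STG2) as STG2 values(PVE) as PVE
        by Application_Name, TestCase_Value
```

A macro (Settings > Advanced search > Search macros, invoked with backticks) would mainly shorten the SPL, not speed it up, since macros are expanded into the same search at run time; the dedup/sort before the append and the repeated evals are the more likely cost.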
Hi everyone, I am wondering if it is possible to display my results as a string for ComputerName instead of displaying them as a number. I don't believe using count in stats is the right approach here, but I was wondering if someone can help me edit my command to do what I want. Below is the stats command; I want to see the results by user along with WHICH ComputerName and WHICH host, as strings:

| stats count as total_count
        count(eval(EventCode="4625")) as denied_count
        count(eval(EventCode="4624" OR EventCode="4768" OR EventCode="4776")) as permitted_count
        count(eval(host)) as host
        count(eval(ComputerName)) as computer
        by user
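count() always yields a number; values() (or list()) returns the distinct field values themselves. A sketch of the same stats with the last two aggregations swapped:

```
| stats count as total_count
        count(eval(EventCode="4625")) as denied_count
        count(eval(EventCode="4624" OR EventCode="4768" OR EventCode="4776")) as permitted_count
        values(host) as host
        values(ComputerName) as computer
        by user
```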
Hello Splunkers, I am having trouble with some JSON nested arrays that contain multiple latitude/longitude pairs in one event. Is there any way I can split this one event up into 4 single events?
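Without seeing the event it is hard to be exact, but the usual pattern is spath to pull the array into a multivalue field, mvexpand to split it into one event per element, then spath again to re-extract the fields. The path name `coordinates{}` is an assumption about your JSON; a sketch:

```
... | spath output=coord path=coordinates{}
| mvexpand coord
| spath input=coord
```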
Hi, I am trying to send logs to Splunk with HEC using Logstash, but the configuration is not working. A curl from the server works, but logs aren't going through Logstash.

curl -k "https://splunk-hec.test.com:443/services/collector/raw?" \
  -H "Authorization: Splunk XXXX" \
  -d '{"event": "Hello!", "sourceType": "Test"}'

Logstash output config:

http {
  http_method => "post"
  url => "https://splunk-hec.test.com:443/services/collector/event/1.0"
  headers => ['Authorization', 'Splunk XXXXX']
  mapping => { "sourcetype" => "logstash" }
}

Error:

[HTTP Output Failure] Could not fetch URL {:url=>"https//splunk-hec.test.com:443/services/collector/event/1.0", :method=>:post, :body=>"{\"sourcetype\":\"logstash\"}", :headers=>{"Authorization"=>"Splunk XXX", :message=>"connect timed out",
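Two things stand out in the pasted error (both hedged guesses): the body being sent is only {"sourcetype":"logstash"} with no "event" key, which the /services/collector/event endpoint rejects, and "connect timed out" suggests the Logstash host cannot reach the HEC port at all (the working curl may have run from a different machine or network path). A sketch of a mapping that includes the event payload:

```
http {
  http_method => "post"
  url => "https://splunk-hec.test.com:443/services/collector/event/1.0"
  headers => ["Authorization", "Splunk XXXXX"]
  format => "json"
  mapping => {
    "event" => "%{message}"
    "sourcetype" => "logstash"
  }
}
```

Testing the same curl from the Logstash host itself would separate the payload problem from the network problem.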
The search below looks up a serial number in another index. There will be multiple values for "X", but currently it only returns 1. How do I get it to return all of the values? Also, a second question: since it's only returning 1 value, how does it choose which value to return?

index=email serialnumber=123456789
| join serialnumber type=left
    [ search index=db | dedup Y | rename serial AS serialnumber ]
| table serialnumber X
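join keeps only the first matching subsearch row per event by default (max=1), which answers the second question: you get whichever matching row the subsearch emits first. A sketch with max=0 (unlimited matches); note the dedup on Y may also be discarding rows you want:

```
index=email serialnumber=123456789
| join serialnumber type=left max=0
    [ search index=db | rename serial AS serialnumber ]
| stats values(X) as X by serialnumber
```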
It's similar to Windows TA not Parsing "Error_Code" from 4776 Logs. My take on that is: the TA does the following. If a field by the name Status (a Windows field) exists, its value is copied to a new field called Error_Code (a Splunk field). If Status has no value, Error_Code gets a dash (-). So it's a field alias. Now, if Error_Code already existed as a Windows field, then Error_Code is overridden by the value of the Status field or a dash. So we end up losing a lot of data.
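One way to keep the original Windows value is to extract it into a differently named field, since custom EXTRACT regexes apply before field aliases in Splunk's search-time operation order. Everything below is a hypothetical sketch: the stanza name and regex depend on whether your events are XML-rendered or classic format:

```
# props.conf (local), hypothetical stanza and regex
[XmlWinEventLog]
EXTRACT-orig_error_code = Error_Code='(?<orig_error_code>[^']+)'
```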
I have a table that shows instances of errors from the event log over time by host. I use a drop-down that searches the event log data with:

Type="Error" | top limit=20 Message

to populate $ErrorMessage$ with the value in the Message column. Then I have a table that uses $ErrorMessage$ and does this search:

Type="Error" Message="$ErrorMessage$"
| eval host=upper(host)
| timechart count by host

The table and the drop-down both default to 24-hour periods. It works, except when the Message contains reserved characters, like [ or ]. Then I don't get any matches, even though results show in the drop-down. Do I need to escape characters in $ErrorMessage$ when I do my search for the timechart? If so, how do I do that without knowing what characters will show up or how many?
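Simple XML token filters handle this case: $ErrorMessage|s$ wraps the token value in double quotes and escapes any embedded quotes, so bracket characters are treated as literal text rather than search syntax. A sketch of the timechart panel's search:

```
Type="Error" Message=$ErrorMessage|s$
| eval host=upper(host)
| timechart count by host
```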
Hi all, I need help adjusting the search below so the output matches what I expect.

index=nw_syslog "DDOS_PROTOCOL_VIOLATION_SET" AND ( "*USDAL*" OR "*USEMC*" OR "*NLACO*" OR "*SGPNH*" OR "*USHCO*" OR "*INMCO*" OR "*CACCO*" OR "*CATRC*" OR "*GBLHD*") ARP
| stats latest(_time) as Time_CST count by hostname
| sort - Time_CST
| fieldformat Time_CST=strftime(Time_CST,"%x %X")

Current output:

hostname              Time_CST           count
USEMCPOD07-DCNPS3003  02/28/20 06:41:37  3
USEMCPOD07-DCNPS3001  02/28/20 06:41:36  3
USEMCPOD07-DCNPS3002  02/28/20 06:41:36  3
USEMCPOD07-DCNPS3004  02/28/20 06:41:36  2

Expected output (minus the seconds):

hostname              Time_CST        count
USEMCPOD07-DCNPS3003  02/28/20 06:41  3
USEMCPOD07-DCNPS3001  02/28/20 06:41  3
USEMCPOD07-DCNPS3002  02/28/20 06:41  3
USEMCPOD07-DCNPS3004  02/28/20 06:41  2
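%X expands to the full hours:minutes:seconds locale time; spelling out the parts drops the seconds. Only the fieldformat line should need to change; a sketch:

```
| fieldformat Time_CST=strftime(Time_CST,"%x %H:%M")
```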
Hello, I'm currently running the Splunk App for AWS and am receiving the data without a problem into its own index in our Splunk environment. I've configured Splunk ES CIM data models to look at our custom AWS index. My issue is that we have a correlation search (Short-lived Account Detected) turned on that is not working with our AWS logs properly. This correlation search came out of the box with Splunk Enterprise Security and works with our Windows logs without an issue. The problem is that this correlation search is supposed to show the account that was created and deleted within a short amount of time. But when it comes to our AWS logs, the correlation search generates a notable event that displays the wrong account name. It shows the account that performed the creation and deletion of the short-lived account, and we do not want that. (Example shown in image below.) I went into the back-end (Linux CLI) to create a props.conf with a field-alias statement in the Splunk_TA_aws local configuration on our Splunk ES server, but the field alias didn't map the "requestParameters.userName" field to the "user" field like I thought it would. I thought that if I mapped the interesting field to the field that the data model looks for, then it would show up in the notable event. This is the props.conf field alias that I created in the Splunk_TA_aws local configuration on our Splunk ES server:

[aws:cloudtrail]
FIELDALIAS-requestParameters.userName-for-aws-cloudtrail = requestParameters.userName AS user

Am I going about solving this issue the right way? If not, is there a better way to fix this? Thanks, Grant
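Two hedged observations: field names that contain dots generally need to be quoted on the left-hand side of a FIELDALIAS, and keeping dots out of the alias class name itself avoids parsing surprises. A sketch of the stanza rewritten that way (verify the field appears in ad-hoc search before trusting the data model):

```
# props.conf in Splunk_TA_aws/local on the ES search head
[aws:cloudtrail]
FIELDALIAS-requestparameters_username = "requestParameters.userName" AS user
```

If the Authentication data model is accelerated, the notable may also keep showing the old value until the acceleration catches up.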
Hello, I just want to parse a log file. I have tried every solution found on the forum, but none work. (Splunk 7.3.3)

Log:

<event action=....>
<start>2020-02-22 12:49:21:596</start>
<client .../>
<sent ...></sent>
</event>
<event action=...>
<start>2020-02-22 12:49:20:435</start>
<client .../>
<sent ...></sent>
</event>

What I want on the Splunk SH:

_time                _raw
2020-02-22 13:49:21  <event action=...> <start>2020-02-22 12:49:21:596</start> <client .../> <sent ...></sent> </event>
2020-02-22 13:45:20  <event action=...> <start>2020-02-22 12:49:21:596</start> <client .../> <sent ...></sent> </event>

What I have on the Splunk SH:

_time                _raw
2020-02-22 13:49:21  <event action=...>
2020-02-22 13:45:20  <event action=...>

inputs.conf on the UF:

index = test
sourcetype = my_sourcetype
disabled = 0

4 props.conf tried on the Indexer (based on forum solutions):

[my_sourcetype]
LINE_BREAKER = ([\r\n]+)\<event
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE

[my_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\s*\<event

[my_sourcetype]
SHOULD_LINEMERGE = true
MUST_BREAK_AFTER = \</event\>

[my_sourcetype]
DATETIME_CONFIG = CURRENT
KV_MODE = xml
LINE_BREAKER = ([\r\n]+)(?=\s*\<event.*?\>)
BREAK_ONLY_BEFORE_DATE = False
MUST_BREAK_AFTER = \</event\>
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TRUNCATE = 0

props.conf on the Search Head:

[my_sourcetype]
KV_MODE = xml

After each change, I restart Splunk on the Idx and SH. Thanks for the help. Regards
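A consolidated sketch (untested). The key points: line merging off, a LINE_BREAKER that breaks only before <event, and an explicit timestamp recipe pointed at the <start> element. If a heavy forwarder sits between the UF and the indexer, the props must live there instead, because parsing happens on the first full Splunk instance the data passes through:

```
# props.conf on the indexer (or heavy forwarder)
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<event
TIME_PREFIX = <start>
TIME_FORMAT = %Y-%m-%d %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
```

If events still break per line after this, it usually means the stanza is not being applied: worth confirming the sourcetype name matches exactly and that no intermediate forwarder has already parsed the data.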
After upgrading to v8.0.1 we noticed that many of our long-running scheduled searches are ending up in a "Finalized" state, instead of a "Done" state. We also suspect that our results are now incomplete. What is happening?
Hi, I have a series of log entries that are in the form:

#4 MyApp\Framework\DB\Adapter\Pdo\Mysql->_query('SELECT `store_we...', array()) called at [vendor/myapp/framework/DB/Adapter/Pdo/Mysql.php:621]
#5 MyApp\Framework\DB\Adapter\Pdo\Mysql->query(MyApp\Framework\DB\Select#b8e969b2c2d6#, array()) called at [vendor/myapp/zendframework1/library/Zend/Db/Adapter/Abstract.php:737]
#6 Zend_Db_Adapter_Abstract->fetchAll(MyApp\Framework\DB\Select#b8e969b2c2d6#) called at [vendor/myapp/module-store/App/Config/Source/RuntimeConfigSource.php:87]

where the bits between # and # are the only differences between them. I wanted to use sed to replace the bits between # and # with a common string, so that when I do a stats on them they all appear the same and I can get a nice count of how often the error occurred. I've tried using:

rex field=Message mode=sed "s/(Select\#[^\#]*\#)/Select/g"

What would I need to do to replace the portions between the # and # (in this case #b8e969b2c2d6#)?

Thanks, kind regards, Ian
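A sketch that matches the hex identifier between the hashes directly, rather than anchoring on 'Select' (untested; assumes the IDs are always lowercase hex, so frame markers like "#5 " are left alone because no closing # follows them):

```
... | rex field=Message mode=sed "s/#[0-9a-f]+#/#ID#/g"
| stats count by Message
```

This turns Select#b8e969b2c2d6# into Select#ID# on every entry, so otherwise-identical messages collapse into one stats row.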
I have two queries:

1: sourcetype=A error=499
2: sourcetype=B X=*

I would like to make a timechart of the percentage of A over B. Basically, I want to make a timechart that will tell me whether an error-code increase is due to a volume decrease, etc.
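A sketch (untested) that runs both as one base search and divides per time bucket; the 1h span is an arbitrary choice:

```
(sourcetype=A error=499) OR (sourcetype=B X=*)
| timechart span=1h count(eval(sourcetype=="A")) as a count(eval(sourcetype=="B")) as b
| eval pct=round(a/b*100, 2)
| fields _time, pct
```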
hi UI gurus, we have a simple requirement to display certain links in a dashboard. All is good until there are invalid (un-encoded) characters involved; and if I use CDATA, then Splunk Simple XML takes the values as literal text. Below is the dashboard we want to achieve:

<dashboard>
  <label>test HREF CDATA</label>
  <row>
    <html>
      <h1> HREF test with un-encoded characters </h1>
      <li>Lookup Links
        <ul>
          <li>My Lookup => <a href="../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv">Link</a></li>
        </ul>
      </li>
    </html>
  </row>
</dashboard>

Since the href is un-encoded, this complains. When we change the <a> link as below:

<![CDATA[<a href="../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv">Link</a>]]>

then Splunk XML does not render the href properly. What's the best way to use "href" links in Simple XML?

PS: Even <link> doesn't like & in the URL; the below has the same problem:

<link>
../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv
</link>

xx
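In XML, a literal & inside an attribute or element text must be written as the entity &amp;; the browser decodes it back before following the link, so the URL actually requested still contains a plain &. A sketch of the anchor with the entities applied:

```
<a href="../lookup_editor/owner=nobody&amp;namespace=search&amp;lookup=xyz.csv&amp;type=csv">Link</a>
```

The same substitution should also make the <link> variant parse.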
Hello, I am new to Splunk so apologies if this question seems overly simple. Currently I have a search where in the query I list off the different sources, e.g.:

index=my_index host=my_host (source=".../component_1.log" OR source=".../component_2.log" OR ... etc) "keyword"

However, requirements have changed and I now need to store that list of sources in a lookup file, which looks like this:

source
".../component_1.log"
".../component_2.log"
...
".../component_n.log"

Can I take the values stored in the lookup file and use them as the source value in a subsequent search? It seems like something very easy but I just can't seem to get it right. I have added the lookup correctly to my Splunk environment and can see its contents okay:

|inputlookup my_lookup.csv

I just can't seem to combine the two elements. Am I missing something basic?

|inputlookup my_lookup.csv | rename source as lookup_source | fields lookup_source | search index=my_index host=my_host source=lookup_source "keyword"

Thanks.
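A sketch (untested): putting inputlookup in a subsearch expands its source column into OR-ed source= filters for the outer search, so no rename is needed:

```
index=my_index host=my_host "keyword"
    [| inputlookup my_lookup.csv | fields source ]
```

To preview exactly what the subsearch will expand to, run `| inputlookup my_lookup.csv | fields source | format` on its own.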
I have HEC messages that are indexed with the sourcetype _json. This is a built-in Splunk sourcetype, obviously, and has the following configuration:

[_json]
pulldown_type = true
INDEXED_EXTRACTIONS = json
KV_MODE = none
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/

I have a problem, however, with the length of the indexed fields: they are truncated to 1000 characters. I can't seem to figure out which setting I should change to increase that limit. To give a bit more context, the HEC messages that I receive are roughly structured as follows:

{
  "id": "35298092067921924966859073695563957796481621929900441603",
  "level": "INFO",
  "message": "2020-02-27T16:33:10.666Z e18c650c-7d2d-4acc-bf9c-bfbb1fd0cec4 INFO {\"message\":\"Error while ... \"}"
}

So we actually have extracted fields called message (and id, level, etc.), but that field can be rather long and is truncated at 1000 characters. I've tried to find this in the limits.conf documentation, but I cannot find a definitive value there. Can somebody help me out?
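Not a definitive answer, just a place to start looking: search-time KV extraction has a per-event character cap in limits.conf under the [kv] stanza (maxchars, default 10240). Whether that is the 1000-character limit you're hitting for indexed extractions I'm not certain, so treat this as an assumption to test rather than the known fix:

```
# limits.conf (assumption, untested)
[kv]
maxchars = 20480
```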
Hi all, I've just upgraded Splunk Enterprise from 7.1.1 to 8.0.2, Enterprise Security from 5.2.0 to 6.1.0, and all the related apps and TAs on a Search Head. The upgrade went OK, but I have this warning:

Health Check: One or more apps ("TA-json-alerting") that had previously been imported are not exporting configurations globally to system. Configuration objects not exported to system will be unavailable in Enterprise Security.

TA-json-alerting is an app that I cannot find in the baseline, so I wasn't able to upgrade it. First: is this a problem or not? Second: how can I solve it?

Ciao and thanks. Giuseppe
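If the app's objects should be visible to ES, the conventional fix is to export them globally in the app's metadata; a sketch (assumes the app lives at $SPLUNK_HOME/etc/apps/TA-json-alerting; if the app is no longer needed, removing or disabling it should clear the warning instead):

```
# $SPLUNK_HOME/etc/apps/TA-json-alerting/metadata/local.meta
[]
export = system
```

A restart of Splunk on the search head is needed for the metadata change to take effect.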