All Posts

Please can you give an example of your expected results?
| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| lookup app.csv application_codes

When I run the above query, I get application_codes from the mstats query, not from the CSV file.
Try a lookup of application_codes in the CSV and then filter by Type.
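A minimal sketch of that suggestion, assuming app.csv is a lookup file visible to the search and using the field names shown in the question's CSV:

| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| lookup app.csv application_codes OUTPUT Type
| where Type="error1"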
| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes

From the above query I am getting results for service and application_codes, but my requirement is to get the application_codes from a CSV file, and only those with Type=error1. Below is the CSV file:

application_codes  Description    Type
0                  error descp 1  error1
10                 error descp 2  error2
10870              error descp 3  error3
1206               error descp 1  error1
11                 error descp 3  error3
17                 error descp 2  error2
18                 error descp 1  error1
14                 error descp 2  error2
1729               error descp 1  error1
Hi all, I have installed and configured the FortiWeb app for Splunk. The problem is that the time in the log is correct, but the time I receive in the Splunk time column is 7 hours different. It should be mentioned that there is a field in the logs called timezone_dayst which differs from my time zone by exactly 7 hours. I also added TZ = MyTimeZone to the app's props.conf, but the problem still exists. For example, in the image below, the time shown is 8:37 while the log time is 1:07, and of course timezone_dayst has a drift (-3:30 instead of +3:30). Any ideas are appreciated.
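For reference, a TZ override normally lives in a props.conf stanza on the tier that parses the events (indexer or heavy forwarder), and it only affects events indexed after the change. A hypothetical sketch, with placeholder stanza name and time zone:

# props.conf -- hypothetical; replace the stanza and zone with your sourcetype and actual zone
[fortiweb]
TZ = Asia/Tehran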
I recommend using the "where" command:

index=indexname sourcetype=eventname
| where result1 > 5

(Note: this assumes that result1 is already an extracted field. If not, try this:)

index=indexname sourcetype=eventname
| rex field=_raw "result1=(?<result1>\d*)"
| where result1 > 5
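A hedged variant, not part of the original reply: values extracted with rex are strings, so if the numeric comparison misbehaves, coercing with tonumber is the safe form:

index=indexname sourcetype=eventname
| rex field=_raw "result1=(?<result1>\d+)"
| where tonumber(result1) > 5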
Thanks @tscroggins, I did post a new question. How do I filter a field from the log where the values change? For example:

logfile=(result1=0 result2=5 result3=10 result4=14) at 5:00 AM
logfile=(result1=8 result2=5 result3=10 result4=14) at 5:10 AM
logfile=(result1=4 result2=5 result3=10 result4=14) at 5:20 AM
logfile=(result1=3 result2=5 result3=10 result4=14) at 5:30 AM

I want the query to return results only when result1 is greater than 5. Please help. The current state I'm at:

index=indexname | search sourcetype=eventname "result1=5"

gives results, but

index=indexname | search sourcetype=eventname "result1> 4"

returns nothing.
Hi @ZimmermanC1, You can send a private message to the author, @mikaelbje, from their profile page.
Hi @dataisbeautiful, It appears the time picker input ignores the locale and defaults to en_US. Have you contacted support? They can report this internally as a bug.
Hi @Rajpranar, This is a lovely thread, but it's 14 years old. Asking a new, unanswered question will help you get an answer more quickly. You can use the greater than operator in field expressions:

field>1

See https://docs.splunk.com/Documentation/Splunk/latest/Search/Fieldexpressions. If you need to compare the values of two fields, use the where command:

| where field2>field1
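A tiny self-contained illustration of both forms (the field names and values are made up):

| makeresults | eval field=2 | search field>1

| makeresults | eval field1=1, field2=3 | where field2>field1

Both searches return the generated event; the first filters with a field expression, the second compares two fields with where.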
How do I filter a field from the log where the values change? For example:

logfile=(result1=0 result2=5 result3=10 result4=14) at 5:00 AM
logfile=(result1=8 result2=5 result3=10 result4=14) at 5:10 AM
logfile=(result1=4 result2=5 result3=10 result4=14) at 5:20 AM
logfile=(result1=3 result2=5 result3=10 result4=14) at 5:30 AM

I want the query to return results only when result1 is greater than 5. Please help. The current state I'm at:

index=indexname | search sourcetype=eventname "result1=5"

gives results, but

index=indexname | search sourcetype=eventname "result1> 5"

returns nothing.
If the field has values like field=0, field=1, etc., how can I filter for values of this field greater than 1?
Hi @AL3Z, Have you checked https://research.splunk.com/endpoint/a4e8f3a4-48b2-11ec-bcfc-3e22fbd008af/ and the other content available from https://research.splunk.com/?
Hi @alec_stan, You can extract the timestamp using INGEST_EVAL in transforms.conf, referenced by a TRANSFORMS setting in props.conf. If your source type has INDEXED_EXTRACTIONS = json, you can reference the Date and Time fields directly in your INGEST_EVAL expression; otherwise, you can use JSON eval functions to extract the Date and Time values from _raw.

### with INDEXED_EXTRACTIONS

# props.conf
[alec_stan_json]
INDEXED_EXTRACTIONS = json
TRANSFORMS-alec_stan_json_time = alec_stan_json_time

# transforms.conf
[alec_stan_json_time]
INGEST_EVAL = _time:=strptime(tostring(Date).tostring(Time), "%y%m%d%H%M%S")

### without INDEXED_EXTRACTIONS

# props.conf
[alec_stan_json]
TRANSFORMS-alec_stan_json_time = alec_stan_json_time

# transforms.conf
[alec_stan_json_time]
INGEST_EVAL = _time:=strptime(tostring(json_extract(json(_raw), "Date")).tostring(json_extract(json(_raw), "Time")), "%y%m%d%H%M%S")

If the event time zone differs from the receiver time zone, add a time zone string (%Z) or offset (%z) to the eval expression:

[alec_stan_json_time]
INGEST_EVAL = _time:=strptime(tostring(Date).tostring(Time)."EDT", "%y%m%d%H%M%S%Z")

In a typical environment, deploy props.conf to universal forwarders, and props.conf and transforms.conf to receivers (heavy forwarders and indexers). If you haven't already, you should add SHOULD_LINEMERGE, LINE_BREAKER, etc. settings to props.conf to correctly break your input into events. You can also set DATETIME_CONFIG = CURRENT or DATETIME_CONFIG = NONE to help Splunk skip automatic timestamp extraction logic, since you'll be extracting the timestamp using INGEST_EVAL.
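A hypothetical sketch of those line-breaking settings for single-line JSON events (the values are assumptions, not part of the original reply; adjust LINE_BREAKER to your actual event boundaries):

# props.conf -- hypothetical line-breaking sketch
[alec_stan_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = CURRENT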
| spath _embedded.metadata.data.results{}.notifications output=results
| mvexpand results
| rex field=results "\"(?<ErrorCode>\d+)\":\s*\"(?<ErrorMessage>[^\"]*)\""
| stats count by ErrorCode ErrorMessage
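To spell out why this shape is needed: the keys under notifications are the error codes themselves, so there is no fixed path for spath to enumerate; mvexpand produces one event per notifications object, and rex then extracts each code/message pair from the raw JSON fragment before stats counts them.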
When the servers reboot do they reload the Golden Image?  If so, they will not retain their GUID.
Hi, I am having trouble generating a stats report based on JSON data containing an array. I want to produce the following report:

ErrorCode  ErrorMessage                  Count
212        The image quality is poor     1
680        The image could not be found  1
809        Document not detected         1

When I run the stats command, I do not get any results:

| spath input=jsondata
| stats count by "embedded.metadata.data.results{}.notifications.*"

I have to know the error code value in the array in order to get any stats output. For example:

| spath input=jsondata
| stats count by "embedded.metadata.data.results{}.notifications.809"

Result:

embedded.metadata.data.results{}.notifications.809  count
Document not detected                               1

Here is an example of the JSON data:

{
  "_embedded": {
    "metadata": {
      "environment": {
        "id": "6b3dc"
      },
      "data": {
        "results": [
          {
            "documentId": "f18a20f1",
            "notifications": {
              "212": "The image quality was poor"
            }
          },
          {
            "documentId": "f0fdf5e8c",
            "notifications": {
              "680": "The image could not be found"
            }
          },
          {
            "documentId": "95619532",
            "notifications": {
              "809": "Document not detected"
            }
          }
        ]
      }
    }
  }
}

Thanks in advance for any assistance!
Hi, Could anyone please share the dashboard SPL for the lateral movement shown in this YouTube video? https://youtu.be/bCCf9q2B4BM?si=P7FoduAwS--Hkgbw Thanks
@emallinger Do you remember which specific parameters were missing? Running into the same problem while using IBM COS as S3 SmartStore.
Having a more accurate representation of the data is definitely an improvement. But you still need to answer the questions about data characteristics (lots of possibilities regarding the triplets P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS) and those about desired results (why multiple columns with ordered lists, should triplets be deduplicated, in what order, etc.), because each combination requires a different solution and can give you very different results. Other people's mind-reading is more often wrong than right. Let me try two mind-readings to illustrate.

First: you want to preserve every triplet even if they repeat, and you want them presented in the order of event arrival as well as the order they appear inside each event, grouped by correlationId. Absolutely no dedup. (Although this looks like it has the fewest commands, this "solution" is the most demanding in memory.)

| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #1 ```

Using a composite emulation based on the samples you provided (see the end of this post), you will get:

correlationId        BATCH_ID  REQUEST_ID  RETURN_STATUS
490cfba0e9f3c770b40  1         177         SUCCESS
                     2         1r7         SUCCESS
                     3         1577        SUCCESS
                     4         16577       SUCCESS
                     1         1005377     SUCCESS
                     2         1005177     SUCCESS
                     3         1005377     SUCCESS
                     4         1005377     SUCCESS
                     5         1005377     SUCCESS
                     6         100532177   SUCCESS

Does this look like something you need?

Or, mind-reading #2: you don't want any duplicate triplet; neither the order in which triplets arrive with events nor the order they appear within individual events matters. You want maximum dedup, grouped by correlationId.

| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| stats count by P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #2 ```

The same emulation gives:

correlationId        BATCH_ID  REQUEST_ID  RETURN_STATUS
490cfba0e9f3c770b40  1         1005377     SUCCESS
                     1         177         SUCCESS
                     2         1005177     SUCCESS
                     2         1r7         SUCCESS
                     3         1005377     SUCCESS
                     3         1577        SUCCESS
                     4         1005377     SUCCESS
                     4         16577       SUCCESS
                     5         1005377     SUCCESS
                     6         100532177   SUCCESS

Note: even though this mock result is only superficially different from the previous one, the two can be materially very different if the real data contains lots of duplicate triplets.

As a bonus, here is a third mind-reading: you don't care about triplets at all; you only want to know which values are present in each of P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS. (This one is the least demanding in memory, and computationally light.)

| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats values(P_*) as * by correlationId
``` mind-reading extremo ```

The emulated data gives:

correlationId        BATCH_ID  REQUEST_ID  RETURN_STATUS
490cfba0e9f3c770b40  1         1005177     SUCCESS
                     2         100532177
                     3         1005377
                     4         1577
                     5         16577
                     6         177
                               1r7

Is this closer to what you want? The three very different results above are derived by assuming that the two sample/mock JSON events contain the same correlationId, as emulated below. They all roughly fit the mock result table you showed. How can volunteers tell?
| makeresults
| fields - _*
| eval data = mvappend(
"{ \"correlationId\" : \"490cfba0e9f3c770b40\", \"content\" : { \"List of Batches Processed\" : [ { \"P_REQUEST_ID\" : \"177\", \"P_BATCH_ID\" : \"1\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1r7\", \"P_BATCH_ID\" : \"2\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1577\", \"P_BATCH_ID\" : \"3\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"16577\", \"P_BATCH_ID\" : \"4\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }] } }",
"{ \"correlationId\" : \"490cfba0e9f3c770b40\", \"message\" : \"Processed all revenueData\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\", \"category\" : \"prc-api\", \"elapsed\" : 472, \"locationInfo\" : { \"lineInFile\" : \"205\", \"component\" : \"json-logger:logger\", \"fileName\" : \"G.xml\", \"rootContainer\" : \"syncFlow\" }, \"timestamp\" : \"2024-03-06T20:57:17.119Z\", \"content\" : { \"List of Batches Processed\" : [ { \"P_REQUEST_ID\" : \"1005377\", \"P_BATCH_ID\" : \"1\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1005177\", \"P_BATCH_ID\" : \"2\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_FILE_NAME\" : \"Template20240306102959.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1005377\", \"P_BATCH_ID\" : \"3\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306103103.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1005377\", \"P_BATCH_ID\" : \"4\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306103205.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1005377\", \"P_BATCH_ID\" : \"5\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_FILE_NAME\" : \"Template20240306103306.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"100532177\", \"P_BATCH_ID\" : \"6\", \"P_TEMPLATE\" : \"ATVI_Transaction_Template\", \"P_PERIOD\" : \"MAR-24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"rev_ATVI_Transaction_Template20240306103407.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" } ] } }")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation above ```