All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Wow, it worked. I will accept this as the solution. Thank you so much. What did the "eval if" part do? If score > 0, include the vuln; if not, assign null(), which means DC will ignore it? eval(if(score > 0, vuln, null()))
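A minimal sketch of this behaviour using makeresults (the field values are fabricated for illustration):

```
| makeresults count=3
| streamstats count as n
| eval score=n-2
| eval vuln="vuln_".n
| stats count(eval(if(score > 0, vuln, null()))) as positive_vulns, count(vuln) as all_vulns
```

Aggregate functions skip null values, so positive_vulns counts only the one event whose score is greater than 0, while all_vulns counts all three.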
As I noted in https://community.splunk.com/t5/Splunk-Search/Date-time-formatting-variables-not-producing-result-I-expected/m-p/666477#M228639, the letter "Z" signifies a standard time zone and you should NOT simply remove it.  Instead, let Splunk process it as a timezone token before rendering the end result in whatever string format you want.  In other words, | eval stime=strftime(strptime(stime,"%FT%T%Z"),"%F %T") | eval etime=strftime(strptime(etime,"%FT%T%Z"),"%F %T") | eval orgstime=strftime(strptime(orgstime,"%FT%T%Z"),"%F %T") | eval orgetime=strftime(strptime(orgetime,"%FT%T%Z"),"%F %T")
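To see the difference, you can compare both variants on a sample value (a makeresults sketch; the output depends on your server's time zone):

```
| makeresults
| eval raw="2023-11-01T15:54:00Z"
| eval with_tz=strftime(strptime(raw,"%FT%T%Z"),"%F %T")
| eval without_tz=strftime(strptime(raw,"%FT%TZ"),"%F %T")
```

with_tz parses the Z as UTC and converts to local time; without_tz treats the Z as literal text and interprets the timestamp as local time, so the two values differ unless the server runs in UTC.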
So, your formula uses min_score as the base and sets the "threshold" at 2/3 of the way between min and max.  If your data has no range between min and max (min == max), this formula simply gives you min_score.  Only people with intimate knowledge of that data and this particular use case can determine the best alternative formula. Say, for example, you decide that instead of min_score + 2/3 * range for all cases, you want the existing formula when the range is greater than, say, 1/10 of min_score, but 4/5 * max_score when the range is too narrow. You can express this directly in SPL:

index=ss group="Threat Intelligence" ``` here I'm grouping the domain names into a single group by their naming convention ```
| eval domain_group=case(like(domain_name, "%cisco%"), "cisco", like(domain_name, "%wipro%"), "wipro", like(domain_name, "%IBM%"), "IBM", true(), "other")
| stats count as hits, min(attacker_score) as min_score, max(attacker_score) as max_score by domain_group, attackerip
| sort -hits
| eval range = max_score - min_score
| eval threshold = round(if(range > min_score / 10, min_score + (2 * range / 3), max_score * 4 / 5), 0)
| eventstats max(hits) as max_hits by domain_group ``` eventstats instead of streamstats ```
| where hits >= threshold ``` threshold is used in place of max_hits ```
| table domain_group, min_score, max_score, attackerip, hits, threshold
| dedup domain_group

This said, I notice the streamstats and dedup in your code, and the criterion hits >= max_hits.  Maybe you have a different use case in mind?
- threshold is not used at all.  Why calculate it?
- The condition hits >= max_hits combined with streamstats (as opposed to eventstats as I illustrated above) will alert on every IP whose hits exceed all previous ones (instead of the largest one, or the ones exceeding the calculated threshold). Is this what you wanted?
- Your table retains attackerip, but dedup domain_group will lose all rows except the highest in each group.
Maybe your use case is simpler: you want every domain group to alert, but only on the IP address with the largest hits? Even so, the use case is still very unclear.
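If that simpler reading is what you want, a sketch reusing the field names from the SPL above (no thresholds needed) could be:

```
index=ss group="Threat Intelligence"
| eval domain_group=case(like(domain_name, "%cisco%"), "cisco", like(domain_name, "%wipro%"), "wipro", like(domain_name, "%IBM%"), "IBM", true(), "other")
| stats count as hits by domain_group, attackerip
| sort - hits
| dedup domain_group
```

Because the results are sorted by hits in descending order, dedup keeps the first, i.e. highest-hits, attackerip per domain_group.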
Hi @ITWhisperer  Really appreciate your patience and support. In these results, the end time is not populating for most of the events; I need only the events that contain both start and end timestamps.
I have a query to display the following 3 fields: | table pp_user_action_name, Today_Calls, Avg_today  I want to replace the 'Avg_today' column header with today's date, like '11/1/2023'. Is that possible?
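One possible approach (a hedged sketch, assuming the three fields from the table command): SPL cannot give a column a dynamic name directly, but eval's {field} syntax creates a field whose name is the value of another field. Note that %d is zero-padded, so this produces '11/01/2023' rather than '11/1/2023':

```
| eval today=strftime(now(), "%m/%d/%Y")
| eval {today}=Avg_today
| fields - Avg_today today
| table pp_user_action_name, Today_Calls, *
```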
| eval start_time=if(processing_stage="Obtained data",invocation_timestamp,null()) | eval end_time=if(processing_stage="Successfully obtained genesys response",invocation_timestamp,null()) | stats values(start_time) as start_time values(end_time) as end_time by correlation_id | eval difference=strptime(end_time,"%FT%TZ")-strptime(start_time,"%FT%TZ")
Hello @gabbydm , You can use the following parameters to change the sparkline color from the editor: sparklineColors, sparklineAreaColors. Note that you need to set the cellType to SparklineCell for this visualization. Reference document - https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/DashStudio/objOptRef#columnFormat_.28object_type.29   Thanks, Tejas.   --- If the above solution helps you, an upvote is appreciated.
Hi @ITWhisperer
"invocation_timestamp": "2023-11-01T11:33:41Z"
"processing_stage": "Obtained data" >>> Start time
"processing_stage": "Successfully obtained incontact response" >>> End time
From these events, where exactly do the timestamps come from?
Hi @ITWhisperer

Correlation ID: 0cd56112-6346-4ea3-8a2f-2b59b9eb68ba
Event start time: 11-01-2023 17:03:41:321
Event end time: 11-01-2023 17:04:04:300
Difference: 22.979

Start event:
{"message_type": "INFO", "processing_stage": "Obtained data", "message": "Successfully received data from API/SQS", "correlation_id": "0cd56112-6346-4ea3-8a2f-2b59b9eb68ba", "error": "", "invoked_component": "prd-start-step-function-from-lambda-v1", "request_payload": "", "response_details": "{'executionArn': 'arn:aws:states:eu-central-1:981503094308:execution:contact-centre-dialer-service:8a1acb14-b170-4f95-99bc-7a89ff814207', 'startDate': datetime.datetime(2023, 11, 1, 11, 33, 41, 354000, tzinfo=tzlocal()), 'ResponseMetadata': {'RequestId': '60427a29-6dd4-4cdf-b5c0-fc6cb45b08b2', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '60427a29-6dd4-4cdf-b5c0-fc6cb45b08b2', 'date': 'Wed, 01 Nov 2023 11:33:41 GMT', 'content-type': 'application/x-amz-json-1.0', 'content-length': '165', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}", "invocation_timestamp": "2023-11-01T11:33:41Z", "response_timestamp": "2023-11-01T11:33:41Z", "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "student_id": "64690945", "lead-id": "37079165", "country": "Nepal"}}

End event:
{"message_type": "INFO", "processing_stage": "Successfully obtained genesys response", "message": "Successfully obtained genesys response", "correlation_id": "0cd56112-6346-4ea3-8a2f-2b59b9eb68ba", "error": "", "invoker_agent": "arn:aws:sqs:eu-central-1:981503094308:prd-ccm-genesys-ingestor-queue-v1", "invoked_component": "prd-ccm-genesys-ingestor-v1", "request_payload": "", "response_details": "", "invocation_timestamp": "2023-11-01T11:34:04Z", "response_timestamp": "2023-11-01T11:34:04Z", "original_source_app": "YMKT", "target_idp_application": "", "retry_attempt": "1", "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "campaign-id": "4e749ade-ac9c-45e0-94fe-9ae21e1398d8", "campaign-name": "", "marketing-area": "IDP_NPL", "lead-id": "37079165", "record_count": "", "country": "Nepal"}}
| eval stime=strftime(strptime(stime,"%FT%TZ"),"%F %T") | eval etime=strftime(strptime(etime,"%FT%TZ"),"%F %T") | eval orgstime=strftime(strptime(orgstime,"%FT%TZ"),"%F %T") | eval orgetime=strftime(strptime(orgetime,"%FT%TZ"),"%F %T")
@phanTom  you are a genius!  thank you very much
I had to look through the search job logs where I noticed there were some errors regarding a lookup that didn't exist in that SH but was being used by the SH running the DM acceleration. I added said lookup and fields to all SHs where I was sharing DMA summaries and the error went away. I'd start by reviewing search job logs and then going over your affected DM(s) to see if there are any lookups being used to populate any fields.
These are the final stats results I have now. The query you shared modifies one specific time, but I would like to modify the timestamp in all of the columns mentioned below.
Hi @Mafokognel, Thanks for your answer. I know this, but my question is: after LDAP integration, I see groups containing users, but I don't see groups without users. Do you think that's normal, or could there be an issue? Ciao. Giuseppe
| makeresults | eval time="2023-11-01T15:54:00Z" | eval reformatted=strftime(strptime(time,"%FT%TZ"),"%F %T")
I am trying to remove the T and Z from the output timestamp results. Can you please help me with a query that drops the Z and puts a space in place of the T? 2023-11-01T15:54:00Z
@PickleRick  It works great, thanks. I have another key-value pair called T[001], meaning "Type", on each line. I need to add it to the last line so that it shows in the result. I tried: 1. adding it to the last stats, but it returns nothing for T (because it was removed in the first stats); 2. adding it after "by" in the first stats, which did not work; 3. using eventstats, but it counts all lines that contain the module, while it should return 1.

<your_search>
| stats list(module) as modules by transactionID
| eval modules=mvjoin(modules," ")
| stats count by modules

Any idea?
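One possible fix (a sketch; it assumes the Type value is extracted into a field named T and is constant within each transaction) is to carry it through the first stats with values() instead of adding it to the by clause:

```
<your_search>
| stats list(module) as modules, values(T) as T by transactionID
| eval modules=mvjoin(modules," ")
| stats count, values(T) as T by modules
```

values() returns the distinct values of T per transaction, so it does not split the groups the way adding T after "by" would.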
Hello,  To my knowledge, you have to create a role, then assign the role its permissions. After that, you can map the group and authenticate again. Then go to Users and check that the username is assigned to the group. Thanks
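For reference, the LDAP group-to-role mapping lives in authentication.conf; a sketch with placeholder strategy and group names:

```
# authentication.conf (sketch; strategy name and group DNs are placeholders)
[roleMap_MyLDAPStrategy]
admin = CN=SplunkAdmins,OU=Groups,DC=example,DC=com
user = CN=SplunkUsers,OU=Groups,DC=example,DC=com
```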
You could try using a token to define the series colours. You could set the token in the done handler of the search used for the pie chart such that the right colours are used based on the values present in the results.
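A minimal Simple XML sketch of that idea (the search, token name, and colours are illustrative; real logic would typically set the token inside <condition> elements based on $result.*$ values):

```xml
<search id="pie_base">
  <query>index=main | stats count by status</query>
  <done>
    <set token="pieColors">[0x53a051, 0xdc4e41]</set>
  </done>
</search>
<chart>
  <search base="pie_base"></search>
  <option name="charting.chart">pie</option>
  <option name="charting.seriesColors">$pieColors$</option>
</chart>
```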