All Posts



As I said earlier, "Please can you show an example of a row you expect in the results table i.e. event id, start time and end time, and the raw events that this information would be extracted from."
Hi @ITWhisperer @yuanliu  We are getting multiple events in each lambda. We need to extract the start time and end time of a particular event, and also the difference between the start and end times. As I mentioned above, the image has the correlation id, start time, end time, and difference. Please let me know if you need any further input. Thanks
1 930fd232-8d16-4d1f-8725-a5893e9a46c7 11-01-2023 13:19:06:653 11-01-2023 13:19:23:359 16.706
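Incidentally, the 16.706 in the sample row is just the end time minus the start time. A quick Python check using the two timestamps from the row above (assuming the layout is MM-DD-YYYY HH:MM:SS:milliseconds):

```python
from datetime import datetime

FMT = "%m-%d-%Y %H:%M:%S:%f"  # assumed layout: MM-DD-YYYY HH:MM:SS:milliseconds

start = datetime.strptime("11-01-2023 13:19:06:653", FMT)
end = datetime.strptime("11-01-2023 13:19:23:359", FMT)

# difference between start and end, in seconds
duration = (end - start).total_seconds()
print(duration)  # 16.706
```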
Hi, I want to connect Java code to the Splunk Cloud Platform. Can someone suggest how I can do it?
hello   I have an admin role. When I create a field alias, I can see it in the props.conf file, but when I run the search the field names are unchanged: [sourcetype="Perfmon:mem"] FIELDALIAS-Value = Value AS titi counter AS tutu   What is wrong, please?
Hi @ITWhisperer  Can you clarify what exactly you need?
Yeah, this isn't really what I asked for. I'm sorry I can't be more help, but without the relevant information, it is difficult to suggest a way forward.
Hi @LearningGuy, in the base search you must include all the fields you need in the panels that follow. Then, in each panel, you display only the fields you want for that panel. In your use case:

Base Search:
<search id="base"> <query> index=testindex | fields company ip id AvgScore </query> </search>

Panel's Search:
<search base="base"> <query> | lookup example.csv id OUTPUTNEW id location | table company id ip AvgScore location </query> </search>

Additional information: if the key field in the lookup command has the same name as in the main search, you don't need to use "id as id". Ciao. Giuseppe
Hi @inventsekar @yuanliu    To be more precise, here is my SPL:

index=ss group="Threat Intelligence" ``` here I'm grouping the domain names into a single group by their naming convention ```
| eval domain_group=case( like(domain_name, "%cisco%"), "cisco", like(domain_name, "%wipro%"), "wipro", like(domain_name, "%IBM%"), "IBM", true(), "other" )
| stats count as hits, min(attacker_score) as min_score, max(attacker_score) as max_score by domain_group, attackerip
| sort -hits
| eval range = max_score - min_score
| eval threshold = round(min_score + (2 * (range/3)), 0)
| streamstats max(hits) as max_hits by domain_group
| where hits >= max_hits
| table domain_group, min_score, max_score, attackerip, hits, threshold
| dedup domain_group

Output:
domain_group min_score max_score attackerip hits threshold
cisco 510 1635 XXXXXX 2174 1260
other 960 1760 YYYYYY 2173 1493
wipro 1985 1985 ZZZZZZ 2169 1985
IBM 335 1910 PPPPPP 2153 1385

Note: for wipro, the min and max scores are the same, and we need to fix this! The threshold for wipro comes out the same as its max_score. Thanks.
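As a side note on the wipro row: the threshold formula in the query collapses to max_score whenever a group has only one distinct score, because min_score == max_score makes the range 0. A quick Python analogue of the same arithmetic, using scores from the output table above:

```python
def threshold(min_score, max_score):
    # mirrors the SPL: round(min_score + (2 * ((max_score - min_score) / 3)), 0)
    return round(min_score + 2 * (max_score - min_score) / 3)

print(threshold(510, 1635))   # cisco: 1260
print(threshold(335, 1910))   # IBM: 1385
print(threshold(1985, 1985))  # wipro: 1985 -- range is 0, so threshold == max_score
```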
Hi @ITWhisperer  I already have the JSON fields extracted. For your reference, here is the Sumologic query: (( _sourcehost="/aws/lambda/prd-ccm-genesys-ingestor-v1" OR _sourcehost="/aws/lambda/prd-start-step-function-from-lambda-v1" OR _sourcehost="/aws/lambda/prd-ccm-incontact-ingestor-v1")) | parse "\"correlation_id\":\"*\"," as event_id nodrop | parse "\"message_type\":\"*\",\"processing_stage\":\"*\"," as type,stage | extract field=event_id "'(?<event_id>[a-zA-Z0-9-]+)" multi nodrop | transaction on event_id with "*Obtained data*" as events_received,with "*Successfully obtained incontact response*" as sent_to_incontact, with "*Successfully obtained genesys response*" as sent_to_genesys | where _others != 1 | ((_end_time - _start_time)/1000) as total_time_to_insert_record_in_contact_centre | formatDate(toLong(_start_time),"MM-dd-yyyy HH:mm:ss:SSS") as events_received$start_time$ | formatDate(toLong(_end_time),"MM-dd-yyyy HH:mm:ss:SSS") as sent_to_contact_centre$end_time$ | fields event_id,events_received$start_time$,sent_to_contact_centre$end_time$,total_time_to_insert_record_in_contact_centre
Question is _why_ do you want to do so? For "security" reasons? That won't fly. The user can press F12 and check what's being sent to and from the server so regardless of whether the data is sent as URL parameters or additional data in request headers - it's still there and can still be observed.
Hi @Roy_9, could you better describe your requirement? I understood that you want to trigger an alert if a host didn't send events in the last 15 minutes, is this correct? I don't understand why you take events from a lookup; you should take events from an index. Anyway, see my approach and adapt it to your requirement:

| tstats count latest(_time) AS latest WHERE index=_internal BY host
| where now()-latest>900
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table host latest

In this way you find all the hosts that didn't send events in the last 15 minutes but did send events within the search time period. If a host never sent events in the time range, you won't detect it. To complete your use case, you should create a lookup containing all the hosts to monitor (called e.g. perimeter.csv), containing at least one field (host), and run something like this:

| tstats count latest(_time) AS latest WHERE index=_internal BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total max(latest) AS latest BY host
| where now()-latest>900 OR total=0
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S"), status=if(total=0,"Never sent","Last event: ".latest)
| table host status

Ciao. Giuseppe
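The append/stats pattern above can be sketched in plain Python to show why the lookup is needed to catch hosts that never reported at all (host names and times here are invented for illustration):

```python
import time

now = time.time()

# latest event time per host, as the tstats search would return it (invented values)
seen = {"hostA": now - 60, "hostB": now - 3600}

# full monitoring perimeter, the analogue of perimeter.csv
perimeter = ["hostA", "hostB", "hostC"]

statuses = {}
for host in perimeter:
    latest = seen.get(host)
    if latest is None:
        statuses[host] = "Never sent"          # only detectable via the lookup
    elif now - latest > 900:
        statuses[host] = "Silent for over 15 minutes"
    else:
        statuses[host] = "OK"

for host, status in statuses.items():
    print(host, status)
```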
While you have shown what you are trying to do, it isn't much clearer. Please can you show an example of a row you expect in the results table i.e. event id, start time and end time, and the raw events that this information would be extracted from. Also, you still haven't clarified whether you already have the JSON fields extracted (or whether you need help extracting those as well).
Hi @RSS_STT, sorry, I forgot one asterisk. Please try this: \"CI\":\s+\"(?<CI_V2>[^;]*);(?<CI_1>[^;\"]*);(?<CI_2>[^;\"]*);*(?<CI_3>[^;\"]*);*(?<CI_4>[^;\"]*);*(?<CI_5>[^;\"]*) which you can test at https://regex101.com/r/fndJqR/4 Ciao. Giuseppe
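The same pattern can be sanity-checked in Python; note that the named groups become (?P<...>) in Python's regex syntax, and the sample "CI" value below is invented:

```python
import re

# Giuseppe's regex, with named groups converted to Python's (?P<...>) form
pattern = (r'"CI":\s+"(?P<CI_V2>[^;]*);(?P<CI_1>[^;"]*);(?P<CI_2>[^;"]*)'
           r';*(?P<CI_3>[^;"]*);*(?P<CI_4>[^;"]*);*(?P<CI_5>[^;"]*)')

sample = '"CI": "srv01;app;prod;eu;web;v2"'  # invented sample value
m = re.search(pattern, sample)
print(m.groupdict())
```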
Hi @ITWhisperer  We expect this kind of result. We are migrating the code from Sumologic to Splunk; the result below comes from Sumologic. Yes, you are correct, in this case we have a different correlation id format. I have shared a sample event here: {"message_type": "INFO", "processing_stage": "Obtained data", "message": "Successfully received data from API/SQS", "correlation_id": "00190cdd-1d12-477f-bcc9-a4e2c3dcfb22", "error": "", "invoked_component": "prd-start-step-function-from-lambda-v1", "request_payload": "", "response_details": "{'executionArn': 'arn:aws:states:eu-central-1:981503094308:execution:contact-centre-dialer-service:1da26863-1645-4961-9992-c450cadf4ebd', 'startDate': datetime.datetime(2023, 11, 1, 9, 6, 36, 152000, tzinfo=tzlocal()), 'ResponseMetadata': {'RequestId': 'aba3f7b2-4d4b-4f53-9a5a-fa2c0d0754fd', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'aba3f7b2-4d4b-4f53-9a5a-fa2c0d0754fd', 'date': 'Wed, 01 Nov 2023 09:06:36 GMT', 'content-type': 'application/x-amz-json-1.0', 'content-length': '165', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}", "invocation_timestamp": "2023-11-01T09:06:36Z", "response_timestamp": "2023-11-01T09:06:36Z"}
I am trying to create a support ticket for Splunk Apps and Add-ons on the Cloud version, and there is a field that can't be selected.
Hi, I lost the password for my AppDynamics SaaS service during the trial period. While trying to retrieve data via the REST API, I couldn't set up the account and password properly, so I ended up using a temporary token. Today, when I attempted to access the controller, it prompted me to enter a password. However, even after entering the password it resulted in a login failure, and the password reset instructions were not sent to my email, even after checking the spam folder. What should I do in this situation? Thank you in advance.
Assuming you are talking about passing tokens to dashboards through the URL, this is the way it is done. Alternatives might include: having a separate dashboard for each possible combination of tokens and calling the relevant dashboard; passing an encoded version of the token which the dashboard then decodes; or using a reference to the token which the dashboard then looks up to dereference. All of these options are complex, and I have to wonder whether they are worth the effort.
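For the "encoded version of the token" option, the mechanics are trivial with stdlib base64; keep in mind this is only obfuscation, not security, for the reasons discussed above (the token value below is invented):

```python
import base64

token = "region=emea&team=payments"  # invented token value

# what the linking page would embed in the URL
encoded = base64.urlsafe_b64encode(token.encode()).decode()

# what the receiving dashboard would have to undo before using the value
decoded = base64.urlsafe_b64decode(encoded.encode()).decode()

print(encoded)
print(decoded == token)
```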
It is not clear what is going on here - you have what looks like JSON although not all of it is correctly formatted; you have different correlation ids; you have different timestamp formats. To make things a bit clearer, please share your sample events in a code block </> to preserve the original formatting of the events. Also, please state whether this is actually JSON and whether the fields have already been extracted. Also, please share what your expected output might look like for the shared events, and if it is not obvious from the output, what processing is expected to get the output from the input.
The title is a little confusing.  Based on your description, event timestamps are fine and do not need another "extraction"; your concern is how to obtain the start and end of the tri-event group.  Is this correct? As you observed, there is some limitation in transaction command.  startswith-endswith works best with a clear starting and ending.  Do you mean to say that "Successfully obtained incontact response" and "Successfully obtained genesys response" could appear in arbitrary orders?  Do you mean to say that for each correlation_id, there are only these three events? (Note that the three examples do not all have the same correlation_id.)  Before exploring other options in transaction command, know that it is expensive and is best avoided unless your application has special needs for it. To obtain start time and end time of the group, and to preserve meaningful field values, you can use min, max, and list functions with stats command.  For example, | stats min(_time) as _time max(_time) as group_end list(*) as * by correlation_id Using list on all fields can also be expensive.  So you may want to select just those that matter to your use case. In the end, there are many ways to fulfill a use case.  But particulars in the use case determine how to best get the results.
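The stats-based alternative can be illustrated outside Splunk: grouping (correlation_id, epoch time) pairs and taking min/max per group is exactly what the stats command above does (the events below are invented for illustration):

```python
from collections import defaultdict

# invented (correlation_id, epoch seconds) pairs standing in for the event groups
events = [
    ("abc-1", 100.0), ("abc-1", 104.5), ("abc-1", 116.7),
    ("def-2", 200.0), ("def-2", 203.2),
]

groups = defaultdict(list)
for cid, t in events:
    groups[cid].append(t)

# analogue of: | stats min(_time) as start max(_time) as end by correlation_id
results = {
    cid: (min(ts), max(ts), round(max(ts) - min(ts), 3))
    for cid, ts in groups.items()
}
print(results)
```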
Yes, I understand. It's just that we're trying to find out why you don't have the "Once" option available. It is strange, since I don't recall any capabilities limiting your choice here.   EDIT: The problem with doing it as part of a non-alert scheduled search (a report) is that while you have at least two ways of sending data (either use map to spawn the sendemail command only if there are any results, or use the sendresults add-on), you'd still be operating on a per-event basis, or have to bend over backwards to render your events into a single result before sending them out, which is very inconvenient. So that's why I'm pushing to find out why you can't alert on the whole result set at once.