All Posts

Hello @marnall, thanks. However, this is not clear from the support side; I've already been asked to only do a debug/refresh for capabilities.
Thanks @ITWhisperer. With your Splunk query I am currently only able to list the URL patterns below:

/vehicle/orders/v1/dbd20er9-g7c3-4e71-z089-gc1ga8272179
/vehicle/orders/v1/*/processInsurance
/vehicle/orders/v1/*/validateInsurance
/vehicle/orders/v1/*/validate
/vehicle/orders/v1/*/process

I missed including one more pattern: /vehicle/orders/v1 (the new one). Please help. Thanks in advance.
The problem was definitely in the multisearch. My original token configuration was:

<prefix>(</prefix> <suffix>)</suffix> <valuePrefix>opc="</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter>OR</delimiter>

Based on the feedback of @bowesmana and @marnall I changed it to:

<prefix>IN (</prefix> <suffix>)</suffix> <valuePrefix>"</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter>, </delimiter>

And further down the search:

index="pm-azlm_internal_prod_events" sourcetype="azlm" opc $opc_t$ ...
| append [search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev" opc $opc_t$ ...

All is now working as expected, thank you for your support.
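For anyone else hitting this, a plausible reading of why the original settings failed (token values made up for illustration): the search string already contains the field name, as in opc $opc_t$, so the original token expanded to something like

opc (opc="100" OR opc="200")

which is not valid SPL, while the corrected settings expand to

opc IN ("100", "200")

which is.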
You know, sometimes you know something, but until you really test all the options you're just not sure. With a "count", like

| tstats count WHERE index="<index>" earliest="-5min" latest=now()
| `<mail_macro>`
| rename count as "Events"

there will always be at least one result, "0" (zero). It also does not matter whether the count is 0 or 99999999; there is exactly 1 result. So the email macro does work; the condition "Number of results = 0" just fails, and it will fail by producing false positives with ">=1" as well.

I forgot about "custom trigger conditions" though, which is likely the best solution for the intended use case:

| tstats count WHERE index="<index>" earliest="-5min" latest=now()
| eval Information = if(count="0", "Currently f-d", "Working")
| `<mail_macro>`
| rename count as "Events"

Then using a "custom trigger" like 'search Information = "Currently f-d"' works just as well as the solution outputting results only when there were events in the last x minutes but zero events in the current x minutes. Probably more effective as well. Thank you both for your help, the community here is fantastic.
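For reference, a rough sketch of that other approach (alert only when an index that had events in the previous window has none in the current one); the index name and window sizes are placeholders:

| tstats count AS previous_count WHERE index="<index>" earliest="-10min" latest="-5min"
| appendcols [| tstats count AS current_count WHERE index="<index>" earliest="-5min" latest=now()]
| where previous_count > 0 AND current_count = 0

With "Number of results > 0" as the trigger condition, this should only fire when the index has just gone quiet.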
Yes, AI can be good at making stuff up! 
@PickleRick Actually I found the issue; in my data there are two patterns of events, as shown below. In props.conf I am using TIME_PREFIX = \<\/ReceiverFmInstanceName\>\<eqtext\:EventTime\> and TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ, and because of that TIME_PREFIX setting Splunk is picking up only Pattern 1 and skipping Pattern 2. Can I remove the TIME_PREFIX setting from props.conf so that Splunk will pick up both kinds of events (Pattern 1 and Pattern 2)?

Pattern 1 (the text before the timestamp looks different here):
</ReceiverFmInstanceName><eqtext:EventTime>2024-08-01T21:23:37.560Z

Pattern 2 (the text before the timestamp looks different here):
</State><eqtext:EventTime>2024-08-01T21:23:37.560Z
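Rather than removing TIME_PREFIX entirely (which makes Splunk scan further into the event and risks picking up the wrong timestamp), one option is to anchor it on the part both patterns share, the <eqtext:EventTime> tag immediately before the timestamp. A minimal props.conf sketch, with the sourcetype name as a placeholder:

[my_eqtext_sourcetype]
TIME_PREFIX = \<eqtext:EventTime\>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ

Since TIME_PREFIX is a regex matched ahead of the timestamp, this should cover both the </ReceiverFmInstanceName><eqtext:EventTime> and the </State><eqtext:EventTime> events.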
Yeah, that is an option as well. However, I thought it would be easier if there was an option to rename the fields while writing to the lookup table. I did an AI search which claimed outputlookup had a rename option, but I couldn't find it in the syntax on the Splunk website. So I was just curious whether it is possible at all.
Why can't you just rename it before the outputlookup and rename it back afterwards, as in the sketch below? Please expand on your use case.
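A minimal sketch of that pattern (the field and lookup names are made up):

| makeresults
| eval count=42
| rename count AS saved_count
| outputlookup my_state.csv

and in the next run of the query:

| inputlookup my_state.csv
| rename saved_count AS count

outputlookup itself has no rename option; the rename has to happen in the pipeline before it.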
Hi, I want to rename the fields while writing to a lookup table using the outputlookup command. Is there a way to do it? I intend to use the lookup table in the next run of the same query, so I want separate field names in the lookup table. Thanks in advance for the suggestions.
Thanks for the reply. I will make the corresponding changes so that data is not forwarded from one indexer to another. Now the question is: since my license server lives on the heavy forwarder, can I have license duplication problems? Regards
Updating my typo below; it should read: "... It sounds like there is no reason that my appd-server host does not have sufficient disk storage (50GB) ..."
Hi Martina, thanks for the response. It sounds like there is no reason that my appd-server host does not have sufficient disk storage (50GB required), because I reserved 500GB of disk for this server. To bypass this issue, I tried reducing the disk storage parameter value to "5 * 1024", and it succeeded. Now I can successfully run the AppDynamics Controller.

controller_data_min_disk_space_in_mb = 5 * 1024
Hi All, Just curious why in the Application dashboard I'm not able to see the server details (Azure VMs) where the application is hosted. I have installed Machine Agents on those servers, but I am still not able to see any details in them. However, I can see the servers in the Servers tab at the top of AppDynamics. Is there any configuration I've missed during the instrumentation or installation of the agents? Any insights would be helpful! Thanks in advance!
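One thing worth checking: for Machine Agent metrics to appear under a specific application (rather than only in the Servers tab), the agent generally has to be tied to the same application/tier/node hierarchy that the APM agents report under. A sketch of machine-agent/conf/controller-info.xml with placeholder values (verify the exact elements against your agent version's docs):

<controller-info>
    <controller-host>mycontroller.saas.appdynamics.com</controller-host>
    <controller-port>443</controller-port>
    <controller-ssl-enabled>true</controller-ssl-enabled>
    <account-name>myaccount</account-name>
    <account-access-key>mykey</account-access-key>
    <!-- must match the hierarchy the app (APM) agents report under -->
    <application-name>MyApp</application-name>
    <tier-name>MyTier</tier-name>
    <node-name>MyNode</node-name>
    <!-- sim-enabled=true drives the Servers tab view -->
    <sim-enabled>true</sim-enabled>
</controller-info>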
The lispy includes the following; I didn't see anything else remarkable: [ OR 1 exception::1 ]
Hi @tlmayes, What have you tried so far? If you already have a Microsoft 365 identity that represents your Splunk instance, grant the user or application the necessary Graph API permissions (Chat.ReadBasic and ChatMessage.Send for a user, or Chat.ReadBasic.All and Teamwork.Migrate.All for an application), add the identity to a chat, and give the chat a unique name, e.g. "Splunk." The basic process should be:

1. Authenticate (https://learn.microsoft.com/en-us/graph/auth/auth-concepts).
2. Find the chat by name by enumerating chats (https://learn.microsoft.com/en-us/graph/api/chat-list).
3. Send a message to the chat (https://learn.microsoft.com/en-us/graph/api/chat-post-messages).

I don't have an enterprise Microsoft 365 account myself, but I can help you develop an alert action script here if you don't already have something started.
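To make those three steps concrete, here is a rough Python sketch of the flow using application (client-credentials) auth; the tenant/client IDs, secret, user object ID, and the "Splunk" chat topic are all placeholders, and error handling is mostly omitted:

import requests

TENANT = "<tenant-id>"          # placeholder
CLIENT_ID = "<app-client-id>"   # placeholder
SECRET = "<client-secret>"      # placeholder
USER_ID = "<user-object-id>"    # a member of the chat; placeholder

# 1. Authenticate: get a client-credentials token for Microsoft Graph
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": SECRET,
        "scope": "https://graph.microsoft.com/.default",
    },
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Find the chat named "Splunk" by enumerating the user's chats
chats = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/chats",
    headers=headers,
).json()["value"]
chat_id = next(c["id"] for c in chats if c.get("topic") == "Splunk")

# 3. Send the alert message to the chat
requests.post(
    f"https://graph.microsoft.com/v1.0/chats/{chat_id}/messages",
    headers=headers,
    json={"body": {"content": "Alert fired!"}},
).raise_for_status()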
Additionally, you should drop the subsearch with inputlookup, because either the lookup contains every host that could ever send events, or you want to catch counts from hosts that are not in the lookup.

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" groupby host
| append [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host | eval count=0]
| stats sum(count) as count max(latest_event_time) AS latest_event_time by host
@vtalanki wrote:

This issue has been resolved after I replaced the server-only certs with multi-purpose certs. Posting here for the sake of others.

Multi-purpose cert:
$ openssl x509 -noout -in multi-purpose.pem -purpose
Certificate purposes:
SSL client : Yes
SSL server : Yes

I'm also running into this issue, but I'm unclear as to how to make an SSL cert for client AND server. We generally create a request on the Linux server, then copy that into our CA server with our Linux template and it spits out a certificate. Is it something in our template we need to change? Or is it in the request somehow?
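In case it helps: on a Microsoft CA the extended key usages normally come from the certificate template, not the CSR, so the usual fix is to duplicate your Linux/Web Server template and add Client Authentication alongside Server Authentication under the template's Application Policies. For a quick local test with OpenSSL (a minimal sketch; file names are placeholders), both EKUs can be set at signing time:

# ext.cnf -- extensions applied when signing
extendedKeyUsage = serverAuth, clientAuth

$ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out server.pem -days 365 -extfile ext.cnf
$ openssl x509 -noout -in server.pem -purpose

The last command should then show both "SSL client : Yes" and "SSL server : Yes".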
It does not matter whether SPAN or BINS equals 10; it creates 4 or 5 buckets. I even gave bins=20 but it returned the same result. I have to agree with @PickleRick, the behaviour of bin is sort of interesting. But in fact the documentation says:

bins
Syntax: bins=<int>
Description: Sets the maximum number of bins to discretize into.

So Splunk decides how many bins it creates, not me.
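A quick way to see the difference on generated data (a minimal sketch; no real index needed):

| makeresults count=100
| streamstats count AS x
| bin x span=10
| stats count BY x

buckets the values 1-100 into fixed-width ranges of 10 (0-10, 10-20, ...), while

| makeresults count=100
| streamstats count AS x
| bin x bins=10
| stats count BY x

may give fewer buckets, because bins is only an upper limit and Splunk rounds to a "nice" bucket width of its own choosing.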
| rex "requestBody (?<requestBody>\{.*\})$" | spath input=requestBody source.collaborators.entries{}.accessible_by.name output=accessible_by.name
Try something like this:

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host] groupby host
| append [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host | eval count=0]
| stats sum(count) as count max(latest_event_time) AS latest_event_time by host