All Posts


Hi @bluewizard  Assuming that the updated field is a date field, you could set up another scheduled job that purges rows where the updated field is older than 90 days. For example:

| inputlookup suspicious_domain.csv
| eval updated_epoch=strptime(updated, "<add the date format here>") ``` new field with time as epoch seconds ```
| where updated_epoch >= relative_time(now(), "-90d")
| fields - updated_epoch
``` | outputlookup suspicious_domain.csv ``` ``` <<< uncomment when ready to overwrite the lookup file ```

Hope that helps
Hi @bluewizard  Please test this on a test lookup before running it on the original lookup.

| inputlookup suspicious_domain.csv
| eval TIME=strptime(updated,"%m/%d/%Y")
| eval ninetyDaysAgo=now()-(86400*90)
| where TIME>ninetyDaysAgo
| outputlookup suspicious_domain.csv
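Following that advice, a quick way to make a throwaway copy to test against (a minimal sketch; suspicious_domain_test.csv is a hypothetical filename):

| inputlookup suspicious_domain.csv
| outputlookup suspicious_domain_test.csv

Run the purge search against the test copy first and compare the row counts before pointing it at the real lookup.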
Hi VatsalJagani, Thanks for the update. It will be useful for users who have admin access. I am working in an organization and have only user access. Could you please help me understand, at my level, how I can learn the use case and where to start manually... Thanks,
I have a query below that searches an index and outputs to a CSV file; however, the size of the CSV keeps growing and I would like to purge entries older than 90 days. How do I do it?

index=suspicious_domain
| rename "sources{}.source_name" as description, value as domain, last_updated as updated, mscore as weight
| stats values(type) AS type latest(updated) as updated latest(weight) as weight latest(description) as description latest(associations_type) as associations_type latest(associations_name) as associations_name by domain
| fields - count
| outputlookup append=t suspicious_domain.csv
Ignore this question -- **kwargs count fixed the issue.
The drilldown tokens for a stacked chart will be:

Value of the X-axis (where x_axis_name is the name of the field on your x-axis, i.e. build info) - both of these will work: $click.value$ $row.x_axis_name$
Name of the X-axis column: $click.name$
Value of the clicked stacked element (duration): $click.value2$
Name of the clicked stacked element (process name): $click.name2$
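For reference, a minimal Simple XML sketch of how those tokens might be captured in a drilldown (the index, field names, and token names here are illustrative, not from the original dashboard):

<chart>
  <search>
    <query>index=builds | chart sum(duration) by build_info, process_name</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <drilldown>
    <!-- x-axis value of the clicked column, e.g. a build info string -->
    <set token="sel_build">$click.value$</set>
    <!-- name of the clicked stacked segment, e.g. a process name -->
    <set token="sel_process">$click.name2$</set>
    <!-- y value of the clicked stacked segment, e.g. its duration -->
    <set token="sel_duration">$click.value2$</set>
  </drilldown>
</chart>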
Is there a way of capturing the x, y and z data from a stacked chart?

At the moment, my x, y and z are as follows:
x = build info
y = duration
z = process name (various names stacked in the same column)
You might want to consider just keeping a couple of the fields along with the Attributes.* fields

index = websphere_cct (Object= "HJn3server2" Env="Prod") OR (Object = "HJn8server3" Env="UAT") SectionName="JVM Configuration"
| table Attributes.* SectionName Object Env
| foreach Attributes.*
    [| eval name=SectionName.".<<MATCHSEG1>>"
     | eval {name}='<<FIELD>>']
| fields - Attributes.* name SectionName
| stats values(*) as * by Object
| transpose column_name='SectionName.Attribute' header_field=Object
| eval match = if('HJn3server2' == 'HJn8server3', "y", "n")

The stats values(*) as * by Object will put all the values of all the fields (which don't start with _) in the same row for the same Object field value.
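If it helps to see that behaviour in isolation, here is a minimal sketch you can run anywhere (the Object and heapSize values are made up for illustration):

| makeresults count=4
| streamstats count as n
| eval Object=if(n%2==0,"serverA","serverB")
| eval heapSize=if(n<=2,"256","512")
| stats values(*) as * by Object
``` each serverA/serverB row now carries every distinct heapSize and n value seen for that Object ```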
The whole S3 SmartStore bucket is not immutable. At least in my instance anyway. The actual index data, tsidx files, and other metadata files under the 'db' folder are immutable. However, data model acceleration files located under the 'dma' folder do get updated/deleted as a normal part of Splunk operation. Do we have something misconfigured here? Or is what I am describing normal?
for ip in ips:
    # one search job per IP over the last day; 'service' is an already-connected splunklib client
    query = f'search "{ip}" earliest=-1d index=main | stats count by index'
    job = service.jobs.create(query)

When I have 500 IPs I am only able to generate 100 jobs. Is there a way to generate 500 jobs?
ITWisperer: Thank you so much for the response. I believe what you gave me will do the job; however, I have some questions. Here is the code I used and how it turned out. For expediency, I just used one 'SectionName' ... this field holds the name of the section in WebSphere that holds the attributes of a section of the Application Server.

index = websphere_cct (Object= "HJn3server2" Env="Prod") OR (Object = "HJn8server3" Env="UAT") SectionName="JVM Configuration"
| foreach Attributes.*
    [| eval name=SectionName.".<<MATCHSEG1>>"
     | eval {name}='<<FIELD>>']
| fields - Attributes.* name SectionName
| stats values(*) as * by Object
| transpose column_name='SectionName.Attribute' header_field=Object
| eval match = if('HJn3server2' == 'HJn8server3', "y", "n")

And here are the results: My questions are:
1) How does the command 'stats values(*) as * by Object' work? The 'Object' field is the name of the application server in this case. How does that command group the values of the Attributes by Object?
2) Why are there so many extra fields in the table, such as Order, OrderType, Index, etc. (per below)? Why do these get included, and is the only way to get rid of these fields through the 'fields' command? (i.e. fields - Order OrderType Index ...)
Thank you very much for the help !!!
Hi @arist0telis! A percentage is the number of escalations out of the total established, times 100. Or with more math notation: (BotEscalated/ChatbotEstablished) x 100 = Percentage Escalated. So we convert that to eval statements. I haven't tested it below, but it should be pretty close.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(eval(EntryType=="BotEscalated")) as "BEcount", count(eval(EntryType=="ChatbotEstablished")) as "CEcount" ``` get the counts ```
| eval mypercentage = round(('BEcount'/'CEcount')*100, 2) ``` get the percentage and round to 2 places ```
I think I may have figured it out myself, just had to take a step away for a minute. Pasting what I got here in case this comes up in a Google search and someone else needs help.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(eval(if(EntryType="ChatbotEstablished",1,null()))) as ChatCount count(eval(if(EntryType="BotEscalated",1,null()))) as EscalationCount by ConversationId
| stats sum(ChatCount) as sumChat sum(EscalationCount) as sumEscalation
| eval pctEscalation=round(((sumEscalation/sumChat)*100),2)
| table sumChat, sumEscalation, pctEscalation
I'm working with a table of conversation data; all conversations start out as a bot chat and can be escalated to a human agent. The ConversationId remains persistent through the escalation. Each ConversationEntry is a message, inbound or outbound, in a MessagingSession. ConversationId is the MessagingSession parent to the individual entries in/out. All MessagingSessions I'm looking at will have an EventType=ChatbotEstablished; not all will have an EventType=BotEscalated. I can't figure out how to calculate the percentage of conversations that had an escalation. Below is my query and a stats output. I'm trying to figure out how I get BotEscalated/ChatbotEstablished.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(ConversationId) as EntryCount by EntryType

EntryType           EntryCount
BotEscalated        3
ChatbotEstablished  10
Moh, Please email the POC provided in the "Contact" tab of the splunkbase listing: https://splunkbase.splunk.com/app/5222
Currently, the Oracle Cloud Infrastructure (OCI) Logging Addon needs to be installed on a Linux-based instance; a Windows client will result in the schema error.
I am having an issue with Splunk version 9.0.4.1: it is not giving me the correct amount of license usage for my Splunk instance. All the data appears as required; however, the license usage is not being defined, giving us unlimited usage.
All, I am having this issue with my Splunk env. I keep getting "ingestion_latency_gap_multiplier has exceeded configured value". It is saying it is an issue with my indexers. Any information would help. I am running version 9.0.4.1.
Hi team, does someone have a solution? I have updated to the latest version, 9.1.1, and guess what? It has the same error: Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Anyone with a solution?
@gcusello  SPL Used

index=test
| rename client.userAgent.rawUserAgent as User_Agent client.geographicalContext.city as Src_City client.geographicalContext.state as src_state client.geographicalContext.country as src_country displayMessage as Threat_Description signature as Signature client.device as Client_Device client.userAgent.browser as Client_Browser
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result Outcome_Details
| strcat "Source Country: " src_country ", Source State: " src_state Src_Details
| eval period=if(_time>now()-86400,"Last 24 hours","Previous")
| eventstats dc(period) AS period_count BY src_ip user
| stats count values(period_count) AS period_count min(_time) as firstTime max(_time) as lastTime by src_ip user Signature Threat_Description Client_Device eventType Src_Details Src_City Outcome_Details User_Agent Client_Browser outcome.reason