All Posts

rex can only run after all events have been retrieved. That is why your second search feels slow. When the match happens in the search command, only the matching events are picked up, just as in your first search. Whether the token is IPv4 or IPv6, the search command is the same:

index=vulnerability_index ip="$ip_token$"

Consider the following mock data:

ip
10.10.10.12
50.10.10.17
10.10.10.23
fa00:0:0:0::1
fa00:0:0:0::2

1. $ip_token$ = fa00::1/128

Result:

_time                ip
2023-09-25 22:05:27  fa00:0:0:0::1

| makeresults
| eval ip = split("10.10.10.12 50.10.10.17 10.10.10.23 fa00:0:0:0::1 fa00:0:0:0::2", " ")
| mvexpand ip
| search ip=fa00::1/128
``` the above emulates index=vulnerability_index ip=fa00::1/128 ```

2. $ip_token$ = 10.10.10.23/32

Result:

_time                ip
2023-09-25 22:13:01  10.10.10.23

| makeresults
| eval ip = split("10.10.10.12 50.10.10.17 10.10.10.23 fa00:0:0:0::1 fa00:0:0:0::2", " ")
| mvexpand ip
| search ip=10.10.10.23/32
``` the above emulates index=vulnerability_index ip=10.10.10.23/32 ```
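For contrast, if the slower second search used rex to extract the address before filtering (a guess; the extraction below is hypothetical), it would force Splunk to retrieve every event first:

index=vulnerability_index
| rex field=_raw "ip=(?<ip>\S+)"
| where cidrmatch("10.10.10.23/32", ip)
``` every event is retrieved before rex and the filter run ```

Putting the term in the initial search clause instead lets the indexers drop non-matching events up front.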
Event and Report extract rules

Use the payment business events to identify transactions which have ACCP clearing status (NPP 1012 / NPP 1013) with a missing Settlement Notification event (NPP 1040):

"NPP 1033_CR_INBOUND"
"NPP 1012_CLEARING_INBOUND"
"NPP 1013_RETURN_INBOUND"
"NPP 1040_SETTLEMENT_RECEIVED"

The report should include the following fields:

Time from NPP 1033
TXID from NPP 1033
Amount from NPP 1012 or NPP 1013

I have already created this query:

index=nch_apps_nonprod application=fis-npp source=fis-npp-sit4 ("NPP 1012_CLEARING_INBOUND" OR "NPP 1013_RETURN_INBOUND" OR "NPP 1033_CR_INBOUND")
| rex field=message "eventName=\"(?<eventName>.*?)\""
| rex field=message "txId=\"(?<txId>.*?)\""
| rex field=message "amt=\"(?<amt>.*?)\""
| rex field=message "ibm_datetime=(?<ibm_datetime>.*?),"
| eval Participant=substr(txId,1,8)
| stats values(eventName) as eventName, min(ibm_datetime) as Time, values(amt) as amt by txId, Participant
| where mvcount(eventName) >= 3 AND mvfind(eventName, "NPP 1040_SETTLEMENT_RECEIVED") < 0
| table Time eventName Participant amt

but I am not getting any result.
@Jana42855 - The document that I shared also contains the Splunk queries, which you may not be able to run on their own without installing the App, as they contain macros; you will, however, be able to run them with some modifications. Plus, here is a GitHub repo for the same, which you can use to fetch more info about the use cases: https://github.com/splunk/security_content

I would say pick one use case from the list (https://research.splunk.com/detections) that you understand by looking at the name, spend time on it, and you will pick up all the generic concepts of implementing security use cases.

I hope this helps!!!
We have a Splunk gateway HF that sends alerts when disk usage is more than 80%, and this alert triggers frequently. To resolve this, we need to clear space on the mount point /mnt/spk_fwdbck. This mount point has folders and subfolders going back about three years, with subfolders such as:

acs5x, apc, blackhole, bpe, cisco-ios, oops, paloalto, pan_dc, vpn, windows, unix, threatgrid, pan-ext, ise, ironport, firewall, f5gtmext, f5-asm-tcp

Are these folders safe to delete based on the year, from 2020 to 2023? Can we delete complete previous years' logs, such as 2020, and if so, does it affect anything? I am trying to understand this concept. Please help.
Can we consider using the socket package with correct "try/except" handling and a timeout for filtering, which may be faster? Alternatively, asynchronous I/O can be used, but most importantly, each event must have a timestamp.
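A minimal sketch of that idea, assuming a plain TCP reachability check (the host, port, and field names are illustrative):

import socket
from datetime import datetime, timezone

def probe(host, port, timeout=2.0):
    """Try a TCP connection and return an event that carries its own timestamp."""
    event = {"host": host, "port": port,
             "timestamp": datetime.now(timezone.utc).isoformat()}
    try:
        with socket.create_connection((host, port), timeout=timeout):
            event["reachable"] = True
    except OSError:  # socket.timeout and connection errors are OSError subclasses
        event["reachable"] = False
    return event

print(probe("example.com", 443))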
I guess you don't have permission on the root directory of the C: drive, because it worked when you placed the file into "C:\Program Files". Just check whether you can create a new file (instead of a folder) in the root directory of the C: drive; I guess you currently only have permission to create folders there.
Hi @bluewizard

Assuming that the updated field is a date field, you could set up another scheduled job to purge rows where the updated field is older than 90 days.

For example:

| inputlookup suspicious_domain.csv
| eval updated_epoch=strptime(updated, "<add the date format here>") ``` new field with the time as epoch seconds ```
| where updated_epoch >= relative_time(now(), "-90d")
| fields - updated_epoch
``` | outputlookup suspicious_domain.csv ``` ``` <<< uncomment when ready to overwrite the lookup file ```

Hope that helps
Hi @bluewizard

Please test this on a test lookup before running it on the original lookup.

| inputlookup suspicious_domain.csv
| eval TIME=strptime(updated, "%m/%d/%Y")
| eval cutoff=now()-(86400*90)
| where TIME>=cutoff ``` keep only rows updated within the last 90 days ```
| fields - TIME cutoff
| outputlookup suspicious_domain.csv
Hi VatsalJagani,

Thanks for the update. It will be useful for users who have admin access. I am working in an organization and have only user access. Could you please help me understand, at my level, how I can learn the use cases and where to start manually...

Thanks,
I have a query below that searches an index and outputs to a CSV file; however, the size of the CSV keeps growing and I would like to purge it after 90 days. How do I do it?

index=suspicious_domain
| rename "sources{}.source_name" as description, value as domain, last_updated as updated, mscore as weight
| stats values(type) AS type latest(updated) as updated latest(weight) as weight latest(description) as description latest(associations_type) as associations_type latest(associations_name) as associations_name by domain
| fields - count
| outputlookup append=t suspicious_domain.csv
ignore this question -- **kwargs count fixed the issue.
The drilldown tokens for a stacked chart will be:

Value of the X-axis (where x_axis_name is the name of the field on your x-axis, i.e. build info) - both of these will work:
$click.value$
$row.x_axis_name$

Name of the X-axis column:
$click.name$

Value of the clicked stacked element (duration):
$click.value2$

Name of the clicked stacked element (process name):
$click.name2$
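For reference, a minimal Simple XML sketch wiring those tokens up in a stacked column chart (the token names and search are placeholders):

<chart>
  <search>
    <query>index=builds | chart sum(duration) over build_info by process_name</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <drilldown>
    <!-- x-axis value (build info) -->
    <set token="tok_build">$click.value$</set>
    <!-- clicked stacked segment: name (process) and value (duration) -->
    <set token="tok_process">$click.name2$</set>
    <set token="tok_duration">$click.value2$</set>
  </drilldown>
</chart>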
Is there a way of capturing the x, y and z data from a stacked chart?

At the moment, my x, y and z are as follows:
x = build info
y = duration
z = process name (various names stacked in the same column)
You might want to consider just keeping a couple of the fields along with the Attributes.* fields:

index = websphere_cct (Object="HJn3server2" Env="Prod") OR (Object="HJn8server3" Env="UAT") SectionName="JVM Configuration"
| table Attributes.* SectionName Object Env
| foreach Attributes.*
    [| eval name=SectionName.".<<MATCHSEG1>>"
     | eval {name}='<<FIELD>>']
| fields - Attributes.* name SectionName
| stats values(*) as * by Object
| transpose column_name='SectionName.Attribute' header_field=Object
| eval match = if('HJn3server2' == 'HJn8server3', "y", "n")

The stats values(*) as * by Object will put all the values of all the fields (which don't start with _) in the same row for the same Object field value.
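As a toy illustration of that behaviour which you can paste into a search bar (the attribute names are made up):

| makeresults count=4
| streamstats count as n
| eval Object=if(n<=2, "HJn3server2", "HJn8server3")
| eval heapSize=if(n%2==1, 1000+n, null())
| eval verboseGC=if(n%2==0, "false", null())
| stats values(*) as * by Object
``` one row per Object, holding every value seen for each field ```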
The whole S3 SmartStore bucket is not immutable, at least in my instance anyway. The actual index data, tsidx files, and other metadata files under the 'db' folder are immutable. However, data model acceleration files located under the 'dma' folder do get updated and deleted as a normal part of Splunk operation. Do we have something misconfigured here? Or is what I am describing normal?
for ip in ips:
    query = f'search "{ip}" earliest=-1d index=main | stats count by index'
    job = service.jobs.create(query)

When I have 500 IPs I am only able to generate 100 jobs. Is there a way to generate 500 jobs?
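The cap is likely a server-side concurrent/queued search quota rather than anything in the SDK. A sketch of one workaround, under that assumption and reusing the ips and service objects above, is to bound the number of jobs in flight:

import time

MAX_IN_FLIGHT = 50  # keep this below your role's search quota (assumed value)
running = []

for ip in ips:
    # wait for a slot before dispatching the next search
    while len(running) >= MAX_IN_FLIGHT:
        running = [j for j in running if not j.is_done()]
        time.sleep(1)
    query = f'search "{ip}" earliest=-1d index=main | stats count by index'
    running.append(service.jobs.create(query))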
ITWisperer: Thank you so much for the response. I believe what you gave me will do the job; however, I have some questions. Here is the code I used and how it turned out. For expediency, I just used one 'SectionName' ... this field holds the name of the section in WebSphere that holds the attributes of a section of the Application Server.

index = websphere_cct (Object="HJn3server2" Env="Prod") OR (Object="HJn8server3" Env="UAT") SectionName="JVM Configuration"
| foreach Attributes.*
    [| eval name=SectionName.".<<MATCHSEG1>>"
     | eval {name}='<<FIELD>>']
| fields - Attributes.* name SectionName
| stats values(*) as * by Object
| transpose column_name='SectionName.Attribute' header_field=Object
| eval match = if('HJn3server2' == 'HJn8server3', "y", "n")

And here are the results:

My questions are:

1) How does the command 'stats values(*) as * by Object' work? The 'Object' field is the name of the application server in this case. How does that command group the values of the Attributes by Object?

2) Why are there so many extra fields in the table, such as Order, OrderType, Index, etc. (per below)? Why do these get included, and is the only way to get rid of these fields through the 'fields' command (i.e. fields - Order - OrderType - Index ...)?

Thank you very much for the help !!!
Hi @arist0telis !

A percentage is the number of escalations out of the total established, times 100. Or with more math notation:

(BotEscalated / ChatbotEstablished) x 100 = Percentage Escalated

So we convert that to eval statements. I haven't tested it below, but it should be pretty close.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(eval(EntryType=='BotEscalated')) as "BEcount", count(eval(EntryType=='ChatbotEstablished')) as "CEcount" ``` get the counts ```
| eval mypercentage = round(('BEcount'/'CEcount')*100, 2) ``` get the percentage and round to 2 places ```
I think I may have figured it out myself, just had to take a step away for a minute. Pasting what I got here in case this comes up in a Google search and someone else needs help.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(eval(if(EntryType="ChatbotEstablished",1,null()))) as ChatCount count(eval(if(EntryType="BotEscalated",1,null()))) as EscalationCount by ConversationId
| stats sum(ChatCount) as sumChat sum(EscalationCount) as sumEscalation
| eval pctEscalation=round(((sumEscalation/sumChat)*100),2)
| table sumChat, sumEscalation, pctEscalation
I'm working with a table of conversation data. All conversations start out as a bot chat and can be escalated to a human agent. The ConversationId remains persistent through the escalation. Each ConversationEntry is a message, inbound or outbound, in a MessagingSession. ConversationId is the MessagingSession parent to the individual entries in/out.

All MessagingSessions I'm looking at will have an EntryType=ChatbotEstablished; not all will have an EntryType=BotEscalated. I can't figure out how to calculate the percentage of conversations that had an escalation. Below is my query and a stats output. I'm trying to figure out how I get BotEscalated/ChatbotEstablished.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(ConversationId) as EntryCount by EntryType

EntryType            EntryCount
BotEscalated         3
ChatbotEstablished   10