All Posts


Thank you. I got this to work based on a .NET application: ReturnValue.Success().Convert.ToInt32()
Hi, I can't start the controller. I have attached the error that I am getting. Please suggest how I can solve this issue. Thanks.
How can I avoid sending a Splunk report via email if no results are found? I cannot change it to an alert and use "number of results > 0" as the trigger, because I need to send it as a report with records; so I need to implement this as a report only, not as an alert. I have gone through the existing posts but could not find a solution. Is there any setting under Advanced Edit which could help?
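One workaround sometimes suggested is to disable the report's built-in email action and send the mail from inside the search with the `sendemail` command instead. This is only a sketch: the recipient address and subject are placeholders, and the behavior of `sendemail` when the search returns zero results should be verified in your environment before relying on it.

```
your_report_search_here
| sendemail to="ops@example.com" subject="Scheduled report" sendresults=true inline=true
```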
Thanks, I'll contact him.
Please edit your query to use code blocks (</>) to format it - as it stands, it is almost impossible to work out what your query is. There are plenty of strange things in there, including a random K, a plus sign, seemingly missing pipe symbols, missing double quotes where they would be expected, and stats clauses that don't make a lot of sense.
Hi @AL3Z, this means that the regex is matching only a subset of the data you want to filter; in other words, there are logs in different formats. Analyze the non-matching data and modify the regex, or apply another one. Ciao. Giuseppe
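A quick way to inspect the non-matching events is to negate the match with the `regex` command. A sketch only - the index, sourcetype, and pattern below are placeholders for the ones in your actual search:

```
index=your_index sourcetype=your_sourcetype
| regex _raw!="your_blacklist_pattern"
| head 100
```

Whatever comes back is the subset of events your current pattern does not cover, which tells you what a second pattern (or a broadened first one) needs to handle.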
Hi @gcusello, what could be the reason that I can still see the blacklisted path events, even though the count is reduced?
Hi @jip31, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @gcusello, you are right, thanks.
Hi @jip31, where did you try to change the time period in your alert or scheduled report? You cannot do this in the Alerts (or Reports) dashboard; you have to do it in [Settings > Searches, Reports and Alerts]. Ciao. Giuseppe
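For reference, the time range and schedule being edited correspond to the `dispatch.earliest_time`, `dispatch.latest_time`, and `cron_schedule` keys of the saved search. A sketch of the equivalent savedsearches.conf stanza (the stanza name is illustrative, not the real one):

```
[My Advanced Alert]
dispatch.earliest_time = -60m@m
dispatch.latest_time = -40m@m
cron_schedule = */20 * * * *
```

Editing these via Settings > Searches, Reports and Alerts (or directly in savedsearches.conf) sidesteps the restrictions the alert editing UI enforces.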
Hi, I am trying to configure my alert with an advanced time range like this: earliest = -60m@m, latest = -40m@m. But when I save, Splunk tells me "The changes you have done to this alert time range will not be saved" and "the time range has not been updated; change the alert type in order to modify the time range". What is wrong, please? I also tried to use the cron schedule below, but it is not taken into account: */20 * * * * Thanks for your help
Hi @alexspunkshell, please try this:

index=test
| rename client.userAgent.rawUserAgent as User_Agent client.geographicalContext.city as Src_City client.geographicalContext.state as src_state client.geographicalContext.country as src_country displayMessage as Threat_Description signature as Signature client.device as Client_Device client.userAgent.browser as Client_Browser
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result Outcome_Details
| strcat "Source Country: " src_country ", Source State: " src_state Src_Details
| eval period=if(_time>now()-86400,"Last 24 hours","Previous")
| stats count dc(period) AS period_count min(_time) as firstTime max(_time) as lastTime values(Signature) AS Signature values(Threat_Description) AS Threat_Description values(Client_Device) AS Client_Device values(eventType) AS eventType values(Src_Details) AS Src_Details values(Src_City) AS Src_City values(Outcome_Details) AS Outcome_Details values(User_Agent) AS User_Agent values(Client_Browser) AS Client_Browser values(outcome.reason) AS outcome_reason by src_ip user
| where period_count=1

You can debug your search by deleting the last row. Ciao. Giuseppe
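To see why the final `where period_count=1` keeps only src_ip/user combinations seen in a single period, here is a small self-contained sketch with mock timestamps (the user names and time offsets are made up):

```
| makeresults count=3
| streamstats count as n
| eval user=if(n<3,"alice","bob")
| eval _time=case(n=1, now()-3600, n=2, now()-172800, n=3, now()-3600)
| eval period=if(_time>now()-86400,"Last 24 hours","Previous")
| stats dc(period) AS period_count by user
| where period_count=1
```

Here "alice" has events in both periods, so her distinct period count is 2 and she is filtered out; "bob" only appears in the last 24 hours, so he survives the filter.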
Hi @Manish_Sharma, as I said, it isn't possible to give partial access to an index; access grants are on/off. So the only solution is creating a dedicated summary index (no additional license costs, only storage costs) for that role. To be more precise: in Splunk, all access to indexes is read-only. It isn't possible to modify any data in indexes, and deletion is possible only with the "can_delete" role - and even then it is a logical deletion, not a physical one. Ciao. Giuseppe
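A sketch of the dedicated summary index approach (the index name, role name, and filter below are assumptions, not your real values): a scheduled search copies only the relevant slice of events into the summary index with `collect`, and the new role is then allowed to search only that index.

```
index=app_platform cf_app_name="the_one_app"
| collect index=summary_app_ro
```

```
# authorize.conf
[role_app_ro_viewer]
srchIndexesAllowed = summary_app_ro
srchIndexesDefault = summary_app_ro
```

Users holding only that role can search the summary index but never the shared platform index.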
What is the point of generating 500 searches anyway (and returning possibly huge lists of results for each one)?
You grant permissions on a per-index basis; that's how Splunk works, and it's one of the main reasons you separate data into multiple indexes. You can try some tricks to restrict visibility to a subset of the data a user has access to (by using filters for a role, or by giving a user only some predefined dashboards), but those are relatively easy to circumvent and I wouldn't rely on them. So separating your data properly is one of the steps in architecting your environment.
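The role-based filter trick mentioned above is the `srchFilter` setting in authorize.conf, shown here only as an illustration (the role name and filter term are made up). It silently appends the filter to every search the role runs, which is exactly why it is hard to audit and shouldn't be treated as a security boundary:

```
# authorize.conf
[role_limited_viewer]
srchIndexesAllowed = app_platform
srchFilter = host=appserver*
```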
Hi @gcusello, thank you for your valuable response regarding this issue. The problem is that the index where the app logs are being ingested is shared - a single one for the entire platform. This means we cannot make that index read-only (RO) for a specific role only. Even if we create a different role and give it RO access to that index, the logs will still be visible to other users. Is there any other solution to this problem, or is the only option to ingest those app logs into a different index and then apply restrictions to that specific index? Your insights and suggestions would be greatly appreciated. Log format:

index=app_platform
cf_app_id:
cf_app_name: names for different apps
cf_org_id:
cf_org_name:
cf_space_id:
cf_space_name:
deployment:
event_type:
ip:
job:
job_index:
message_type:
msg: [2023-09-26 05:54:26 +0000] [185] [DEBUG] Closing connection.
origin:
source_instance: 0
source_type: APP/PROC/WEB
timestamp: 1695707666892324540 }
rex can only happen after scooping up all events; that is why your second search feels slow. When the match happens in the search command, you only pick up the matching events - the search behaves just like your first one. No matter whether the token is IPv4 or IPv6, the search command is the same:

index=vulnerability_index ip="$ip_token$"

Consider the following mock data:

ip
10.10.10.12
50.10.10.17
10.10.10.23
fa00:0:0:0::1
fa00:0:0:0::2

1. $ip_token$ = fa00::1/128. Result:

_time ip
2023-09-25 22:05:27 fa00:0:0:0::1

| makeresults
| eval ip = split("10.10.10.12 50.10.10.17 10.10.10.23 fa00:0:0:0::1 fa00:0:0:0::2", " ")
| mvexpand ip
| search ip=fa00::1/128
``` the above emulates index=vulnerability_index ip = fa00::1/128 ```

2. $ip_token$ = 10.10.10.23/32. Result:

_time ip
2023-09-25 22:13:01 10.10.10.23

| makeresults
| eval ip = split("10.10.10.12 50.10.10.17 10.10.10.23 fa00:0:0:0::1 fa00:0:0:0::2", " ")
| mvexpand ip
| search ip=10.10.10.23/32
``` the above emulates index=vulnerability_index ip = 10.10.10.23/32 ```
Event and report extract rules: use the payment business events to identify transactions which have ACCP clearing status (NPP 1012/NPP 1013) with a missing Settlement Notification event (NPP 1040). The events are: "NPP 1033_CR_INBOUND", "NPP 1012_CLEARING_INBOUND", "NPP 1013_RETURN_INBOUND", "NPP 1040 SETTLEMENT RECEIVED".

The report should include the following fields: Time from NPP 1033, TXID from NPP 1033, Amount from NPP 1012 or NPP 1013.

I have already created this query:

index-nch_apps_nonprod applications fis-npp source fis-npp-sit4 ((NPP 1012 CLEARING INBOUND OR NPP 1013 RETURN INBOUND) OR NPP 1033 CR INBOUND or rex field-message "eventName=\"(?<eventName> *?)\"." rex field-message "txId\"(?<txId>. *?)\," Κ I rex field-message "amt=\"(?<amt>.2)\"." rex field-message ibm.datetime-(?<ibm_datetime> *)," + Participant 1 eval Participant substr(txId,1,8) stats values(eventName) as eventName, min(ibt datetime) as Time, values(amt) as amt by (eventName, NPP 1840 SETTLEMENT RECEIVED) < 0 table Time eventName Participant amt where mycount (eventName) >= 3 AND mvfind (eventName, npp 1040)

but I am not getting any results.
@Jana42855 - The document that I shared also contains the Splunk queries, which you may not be able to run on their own without installing the app, as they contain macros; but you will be able to run them with some modifications. Also, here is a GitHub repo for the same, which you can use to fetch more info about the use cases: https://github.com/splunk/security_content I would suggest picking one use case from the list (https://research.splunk.com/detections) that you understand from its name, spending time on it, and you will pick up all the generic concepts for implementing any security use case. I hope this helps!
We have a Splunk gateway HF that sends alerts when disk usage is more than 80%, and this alert is triggered frequently. To resolve the issue we need to clear space on the mount point /mnt/spk_fwdbck. This mount point has folders and subfolders going back about three years, with subfolders like: acs5x, apc, blackhole, bpe, cisco-ios, oops, paloalto, pan_dc, vpn, windows, unix, threatgrid, pan-ext, ise, ironport, firewall, f5gtmext, f5-asm-tcp. Are these folders safe to delete by year, for 2020 to 2023? Can we delete complete previous years' logs, e.g. 2020, and if so, does that affect anything? I am trying to understand this concept; please help.