All Posts


Hi @man03359, the timechart command produces only a single output series, optionally grouped using the BY clause. If you want more values, you have to use bin and stats:

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if(CardType=="girocard",1,0)
| append [| inputlookup Stores_TimeZones.csv where Store=tkg* ]
| bin span=5m _time
| stats sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City BY _time

Ciao. Giuseppe
Hi All, hope this finds you well. I have built a pretty simple search query for my dashboard, plotting a line chart (for monitoring payments done by different debit/credit card types, e.g. Giro, Mastercard, etc., every 5 minutes) using the transaction command, then searching for the card type in the log and extracting the value with a regex into the field named "CardType":

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if(CardType=="girocard",1,0)
| timechart span=5m sum(girocard) AS "Girocard"

Now I have to modify the query to filter it based on Country and Store. The query I am using is:

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if(CardType=="girocard",1,0)
| append [| inputlookup Stores_TimeZones.csv where Store=tkg* ]
| timechart span=5m sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City

I am unable to get any output for Country and City. What am I doing wrong? Please help. Thanks in advance.
If you go straight to the sendemail command, it will execute every time; it just might send an empty set of results. You could use the map command to execute a search (in this case, the sendemail one) for each result. Two caveats though: 1. map is considered a risky command, so you need additional permissions to run it (and judging from the fact that you can't define an alert, I assume you might not have those capabilities). 2. The subsearch is called separately for every result in your pipeline, so if you want to send the whole batch from your main search, you'd need to first combine it into a single row, pass it to the map command, and then "unpack" it again into multiple lines within the subsearch. A bit ugly.
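The map-based approach described above could look roughly like this (a sketch only: the index, search terms, and recipient address are placeholders, and note that this runs the main search a second time inside the map subsearch):

```
index=your_index your_search_terms
| stats count AS result_count
| where result_count > 0
| map search="search index=your_index your_search_terms | sendemail to=\"someone@example.com\" subject=\"Scheduled report\" sendresults=true"
```

Because the stats/where pair collapses the pipeline to zero or one rows, the map subsearch (and therefore sendemail) only fires when at least one result exists.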
@gcusello, the changes made to the DS app's inputs.conf are not being reflected in the forwarder host's etc/apps/.../local/inputs.conf file. In this case, can we paste the regex into this app's inputs.conf so that it works?
Thank you. I got this to work based on a .NET application: ReturnValue.Success().Convert.ToInt32()
Hi, I can't start the controller. I have attached the error that I am getting. Please suggest how I can solve this issue. Thanks.
How can I stop a Splunk report from being sent via email when no results are found? I cannot change it to an alert and use "number of results > 0", as I need to send it as a report with records. So I need to implement this as a report only, not as an alert. I have gone through the existing posts but could not find a solution. Is there any setting in the advanced edit that could help?
Thanks, I'll contact him.
Please edit your query to use code blocks (</>) to format it. As it stands, it is almost impossible to work out what your query is: there are plenty of strange things in there, including a random K, a plus sign, seemingly missing pipe symbols, missing double quotes where they would be expected, and stats clauses that don't make a lot of sense.
Hi @AL3Z, this means that the regex is matching only a subset of the data to filter; in other words, there are logs in different formats. Analyze the non-matching data and modify the regex, or apply another one. Ciao. Giuseppe
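To analyze the non-matching data as suggested, one option (a sketch; the index, sourcetype, and pattern are placeholders for your actual filter regex) is to negate the regex and inspect what falls through:

```
index=your_index sourcetype=your_sourcetype
| regex _raw!="your_filter_pattern"
| head 100
```

Inspecting these events should reveal which additional log format needs its own regex.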
Hi @gcusello, what could be the reason I can still see the blacklisted path events, although the count is reduced?
Hi @jip31, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @gcusello, you are right, thanks.
Hi @jip31, where did you try to change the time period of your alert or scheduled report? You cannot do this from the Alerts (or Reports) dashboard; you have to do it in [Settings > Searches, Reports and Alerts]. Ciao. Giuseppe
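For reference, the same time range and schedule can also be set directly in savedsearches.conf on the search head (a sketch; the stanza name is a hypothetical saved-search name):

```
[My Scheduled Alert]
cron_schedule = */20 * * * *
dispatch.earliest_time = -60m@m
dispatch.latest_time = -40m@m
```

Note the cron expression needs spaces between its five fields; `*/20 * * * *` runs the search every 20 minutes.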
Hi, I am trying to configure my alert with an advanced time range like this: earliest = -60m@m, latest = -40m@m. But when I save, Splunk tells me "The changes you have made to this alert's time range will not be saved" and "the time range has not been updated; change the alert type in order to modify the time range". What is wrong, please? I also tried to use the cron below, but it is not taken into account: */20**** Thanks for your help
Hi @alexspunkshell, please try this:

index=test
| rename client.userAgent.rawUserAgent as User_Agent client.geographicalContext.city as Src_City client.geographicalContext.state as src_state client.geographicalContext.country as src_country displayMessage as Threat_Description signature as Signature client.device as Client_Device client.userAgent.browser as Client_Browser
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result Outcome_Details
| strcat "Source Country: " src_country ", Source State: " src_state Src_Details
| eval period=if(_time>now()-86400,"Last 24 hours","Previous")
| stats count dc(period) AS period_count min(_time) as firstTime max(_time) as lastTime values(Signature) AS Signature values(Threat_Description) AS Threat_Description values(Client_Device) AS Client_Device values(eventType) AS eventType values(Src_Details) AS Src_Details values(Src_City) AS Src_City values(Outcome_Details) AS Outcome_Details values(User_Agent) AS User_Agent values(Client_Browser) AS Client_Browser values(outcome.reason) AS outcome_reason by src_ip user
| where period_count=1

You can debug your search by removing the last row. Ciao. Giuseppe
Hi @Manish_Sharma, as I said, it isn't possible to grant partial access to an index; access grants are all-or-nothing. So the only solution is creating a dedicated summary index (no additional license costs, only storage costs) for that role. To be more precise: in Splunk, all access to indexes is read-only. It isn't possible to modify any data in indexes, and deletion is possible only with the "can_delete" role, and even then it's a logical deletion, not a physical one. Ciao. Giuseppe
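A scheduled search can populate such a summary index with only the subset that the role should see, for example (a sketch: the source index, filter value, field list, and summary index name are all placeholders you would adapt):

```
index=app_platform cf_app_name="allowed_app"
| fields _time cf_app_name msg source_type
| collect index=summary_app_subset
```

The summary index (here `summary_app_subset`) must already exist, and the restricted role is then granted access to that index only.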
What is the point of generating 500 searches anyway (and returning possibly huge lists of results for each one)?
You grant permissions on a per-index basis. That's how Splunk works, and that's one of the main reasons you separate data into multiple indexes. You can try some tricks to restrict visibility to some of the data a user has access to (by using filters for a role, or by giving a user only some predefined dashboards), but those are relatively easily circumvented and I wouldn't rely on them. So separating your data properly is one of the steps in architecting your environment.
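For completeness, the (easily circumvented) per-role filter mentioned above lives in authorize.conf; a sketch, where the role name and filter value are assumptions, and note that srchFilter works reliably only on indexed fields and metadata such as host, source, and sourcetype:

```
[role_app_team]
srchIndexesAllowed = app_platform
srchFilter = cf_app_name::allowed_app
```

If cf_app_name is not an indexed field in your deployment, this filter will not behave as expected, which is another reason to prefer separate indexes.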
Hi @gcusello, thank you for your valuable response regarding this issue. The problem is that the index where the app logs are being ingested is shared: a single index for the entire platform. This means we cannot make that index read-only (RO) for a specific role only. Even if we create a different role and give it RO access to that index, the logs will still be visible to other users. Is there any other solution to this problem, or is the only solution to ingest those app logs into a different index and then apply restrictions to that specific index? Your insights and suggestions would be greatly appreciated. Log format:

index=app_platform
cf_app_id:
cf_app_name: names for different apps
cf_org_id:
cf_org_name:
cf_space_id:
cf_space_name:
deployment:
event_type:
ip:
job:
job_index:
message_type:
msg: [2023-09-26 05:54:26 +0000] [185] [DEBUG] Closing connection.
origin:
source_instance: 0
source_type: APP/PROC/WEB
timestamp: 1695707666892324540 }