Hi @msalghamdi , if you have the list of names to check, you can put them in a lookup (called e.g. names.csv, with one field "name") and run a search like the following:

index=brandprotection name IN (ali, ahmad, elias, moayad)
| stats count BY name
| append [ | inputlookup names.csv | eval count=0 | fields name count ]
| stats sum(count) AS count BY name

Ciao. Giuseppe
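The append/eval trick above seeds every name from the lookup with a zero count, so names missing from the events still appear after the final sum. The same zero-fill idea can be illustrated outside SPL with a rough Python sketch (hypothetical data, not Splunk code):

```python
from collections import Counter

# Names we always want in the output (stands in for names.csv)
all_names = ["ali", "ahmad", "elias", "moayad"]

# Names actually seen in the events (stands in for the index search)
events = ["ahmad", "ali", "ahmad", "ali", "ahmad"]

# Seed every name with 0, then add the real counts -- the same effect
# as appending "eval count=0" rows and summing by name in SPL.
counts = Counter({name: 0 for name in all_names})
counts.update(events)

for name in sorted(counts):
    print(name, counts[name])
```

Names absent from the events (elias, moayad) come out with count 0 instead of being dropped.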
hello Splunkers, I have a requirement where I need to show values in statistics even if they don't exist. For example, here's my search:

index=brandprotection name IN (ali, ahmad, elias, moayad)
| stats count by brand

However, sometimes the names elias and moayad aren't in the logs, but I still need them in the table, so I need the output to be like this:

user count
ahmad 7
ali 4
elias 0
moayad 0

I need a search that would show the results like the table above. Thanks.
You can add the dependency in your app's lib folder and import it from there, or you can create a requirements.txt file, declare it there, and ensure it's installed before installing the app.
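The lib-folder approach usually means vendoring the package into the app (e.g. pip install -t lib some_package) and putting that folder on the import path before importing. A minimal sketch, with a hypothetical app path:

```python
import os
import sys

# Hypothetical app location; adapt to your actual app directory.
APP_ROOT = "/opt/splunk/etc/apps/my_app"

# Put the app's bundled lib/ folder first on the import path so the
# vendored copy of the dependency is found before any system copy.
sys.path.insert(0, os.path.join(APP_ROOT, "bin", "lib"))

# import some_package  # now resolved from the app's lib/ folder first
```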
In the Salesforce app for Splunk, there's a lookup you can use to get the mapping of user IDs and user names. Use the following apps for ingestion of Salesforce events and objects. For stream events, use the streaming app.

Splunk Add-on for Salesforce -> https://splunkbase.splunk.com/app/3549
Splunk Add-on for Salesforce Streaming API -> https://splunkbase.splunk.com/app/5689
Splunk App for Salesforce -> https://splunkbase.splunk.com/app/1931
@ITWhisperer I tried MAX_TIMESTAMP_LOOKAHEAD with values 0 and -1 to disable the timestamp processor, as per the Splunk docs on props.conf, and also tried increasing the lookahead value to 350. But nothing seems to be working.
Exactly what I was saying, you have missed a space between the "-" and the number. Try this:

index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed - (?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
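The difference that one space makes can be checked with an ordinary regex outside Splunk; a quick illustration, using the sample message shown in the raw log later in the thread:

```python
import re

line = "ArchivalProcessor - Total records processed - 27846"

# Without the space before the capture group, the match fails,
# because the character after "-" is a space, not a digit.
assert re.search(r"Total records processed -(?P<processed>\d+)", line) is None

# With the space included, the number is captured.
m = re.search(r"Total records processed - (?P<processed>\d+)", line)
print(m.group("processed"))  # -> 27846
```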
Hi @ITWhisperer,
PFB the search string in a code block:

index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
As I said before, there appears to be a space between "Total records processed -" and 27846, which doesn't appear to have been catered for in your regex:

Total records processed - 27846

Please share the search also in a code block (as above) so we can check.
I have an index with 7 sources, of which I utilize 4. The alert outputs data to a lookup file as its alert action, and is written something like this:

index=my_index source=source1 OR source=source2 OR source=source3 OR source=source4
stats commands
eval commands
table commands
etc.

I want to configure the alert to run only when all four sources are present. I tried doing this, but the alert isn't running even when all 4 sources are present. Please help me on how to configure this.
@ITWhisperer I tried the below query but am still not able to fetch records:

index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount

Please find below the raw logs:

2024-10-29 20:39:55.900 [INFO ] [pool-2-thread-1] ArchivalProcessor - Total records processed - 27846
host = lgposput50341.gso.aexp.com
source = /amex/app/abs-upstreamer/logs/abs-upstreamer.log
sourcetype = 600000304_gg_abs_ipc2
No, it is on the HF and indexer; the UF here is only targeted for getting data in.
The configuration on the HF and indexer is:

[source::asr:report]
DATETIME_CONFIG = CURRENT
@ITWhisperer Thanks for the information. Yes, my actual data is in JSON format. Could you please suggest what I need to do with props so the events can be parsed properly with the timestamp field of the events?
Try extending your MAX_TIMESTAMP_LOOKAHEAD to include the part of the event containing the TRANS_DATE_TIME field (when counted from the beginning of the event data)?
I have lost count of the number of times we have suggested (requested) that event data is shown in raw format (preferably in a code block using the </> button). Splunk will be processing the raw data, not the formatted, "pretty" version you have shown us. In light of this, is your actual raw event data a JSON object, and therefore wouldn't the TIME_PREFIX be more like "time":" (perhaps with some spaces \s)?
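If the raw events are indeed JSON with a "time" key, a props.conf stanza along these lines might apply (the sourcetype name and lookahead value are assumptions to adapt to your data):

```
[your:json:sourcetype]
# Anchor timestamp recognition to the JSON "time" key, allowing optional spaces
TIME_PREFIX = "time":\s*"
MAX_TIMESTAMP_LOOKAHEAD = 30
```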
For point 4... We will create separate AD groups for the different application teams, then assign each group an index and restrict their access to their index only. This is the idea, and it is the reason we create indexes based on the applications. Is this a good approach, or is there any other way to restrict access besides separate indexes? For example, with 10 applications' data in one index, is it possible that one team cannot see the others' data? Please tell me.
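Index-level restriction is typically wired up per role; a minimal authorize.conf sketch of the idea (role and index names are hypothetical):

```
[role_app_team_a]
# Members of this role can only search their own application's index
srchIndexesAllowed = app_a_index
srchIndexesDefault = app_a_index
```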
@ITWhisperer That timezone difference I can exclude by using the TZ setting attribute in props. But I am having another issue with nanoseconds.
Dear ITWhisperer, thank you for your suggestion. Actually, we are planning to move Splunk Enterprise to a new network zone, which means the IPs will not be static. So we will define a DNS server for all Splunk instances so they can resolve each other. Regards.