All Posts


You can add the dependency in your app's lib folder and import it from there, or you can create a requirements.txt file, declare it there, and ensure it's installed before installing the app.
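For the lib-folder approach, here is a minimal sketch of the import shim, assuming your script lives in the app's bin directory and you vendored the packages with something like pip install -r requirements.txt -t lib (the lib folder name is a convention, not a requirement):

import os
import sys

# Prepend <app>/lib to the module search path so vendored packages
# are found before any system-wide copies.
APP_LIB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "lib")
sys.path.insert(0, APP_LIB)

import requests  # hypothetical vendored dependency, now resolved from <app>/lib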
In the Salesforce app for Splunk, there's a lookup you can use to get the mapping of user IDs to user names. Use the following apps for ingestion of Salesforce events and objects; for streaming events, use the streaming add-on:
Splunk Add-on for Salesforce -> https://splunkbase.splunk.com/app/3549
Splunk Add-on for Salesforce Streaming API -> https://splunkbase.splunk.com/app/5689
Splunk App for Salesforce -> https://splunkbase.splunk.com/app/1931
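As a sketch of how such a lookup is typically applied (the index, sourcetype, lookup, and field names below are illustrative placeholders; check the add-on's lookup definitions for the actual names):

index=your_sfdc_index sourcetype=sfdc:logfile
| lookup sfdc_usernames_lookup USER_ID OUTPUT USER_NAME
| table _time USER_ID USER_NAME EVENT_TYPE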
@ITWhisperer I tried MAX_TIMESTAMP_LOOKAHEAD values of 0 and -1 to disable the timestamp processor, as per the Splunk docs on props.conf, and also tried increasing the lookahead value to 350, but nothing seems to be working.
Yes, the search covers all 4 sources. When I run the search manually and check the events, I see all 4 sources present.
Exactly what I was saying: you have missed a space between the "-" and the number. Try this:
index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed - (?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
Is your search wide enough to cover events from all four sources? Does the alert trigger if you reduce it to 3?
Hi @ITWhisperer. PFB the search string in a code block:
index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
As I said before, there appears to be a space between "Total records processed -" and 27846, which doesn't appear to have been catered for in your regex:
Total records processed - 27846
Please also share the search in a code block (as above) so we can check.
I have an index with 7 sources, of which I utilize 4. The alert outputs data to a lookup file as its alert action and is written something like this:
index=my_index source=source1 OR source=source2 OR source=source3 OR source=source4
(stats commands, eval commands, table commands, etc.)
I want to configure the alert to run only when all four sources are present. I tried doing this, but the alert isn't triggering even when all 4 sources are present. Please help me with how to configure this.
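One way to gate this, as a sketch: keep your search as-is and add a guard before your existing commands (the eventstats/where lines are the addition; everything after them stands in for your existing pipeline):

index=my_index source=source1 OR source=source2 OR source=source3 OR source=source4
| eventstats dc(source) AS source_count
| where source_count=4
| ...your existing stats/eval/table commands...

With this guard, the search returns no results unless all four sources appear in the time range, so an alert triggered on "number of results greater than 0" fires only when all four are present.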
@ITWhisperer I tried the below query but am still not able to fetch records:
index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
Please find the raw log below:
2024-10-29 20:39:55.900 [INFO ] [pool-2-thread-1] ArchivalProcessor - Total records processed - 27846
host = lgposput50341.gso.aexp.com
source = /amex/app/abs-upstreamer/logs/abs-upstreamer.log
sourcetype = 600000304_gg_abs_ipc2
No, it is on the HF and indexer; the UF here is only used for getting data in. The configuration on the HF and indexer is:
[source::asr:report]
DATATIME_CONFIG = CURRENT
@ITWhisperer Thanks for the information. Yes, my actual data is in JSON format. Could you please suggest what I need to do in props.conf so the events can be parsed properly with the timestamp field of the events?
Try extending your MAX_TIMESTAMP_LOOKAHEAD to include the part of the event containing the TRAN_DATE_TIME field (when counted from the beginning of the event data).
I have lost count of the number of times we have suggested (requested) that event data be shown in raw format (preferably in a code block using the </> button). Splunk will be processing the raw data, not the formatted, "pretty" version you have shown us. In light of this, is your actual raw event data a JSON object, and therefore wouldn't the TIME_PREFIX be more like "time":" (perhaps with some spaces, \s)?
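If it is JSON, the props.conf entry might look something like this sketch (the sourcetype name, field name, and TIME_FORMAT here are assumptions to adjust to your actual raw events):

[your_json_sourcetype]
TIME_PREFIX = "time"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30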
For point 4: we will create separate AD groups for the different application teams, assign each group an index, and then restrict their access to their own index only. That is the idea, and that is the reason we create indexes based on the applications. Is this a good approach, or is there another way to restrict them other than by index? For example, with 10 applications' data in one index, is it possible that one team cannot see the others' data, or is that not possible? Please tell me.
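Index-level restriction via roles is the standard approach. As a sketch, each AD group maps to a role whose search scope is limited to its own index (the role and index names below are placeholders):

[role_app_team_a]
importRoles = user
srchIndexesAllowed = app_a_index
srchIndexesDefault = app_a_index

This goes in authorize.conf; with LDAP authentication you then map the AD group to the role in the LDAP strategy settings.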
@ITWhisperer That timezone difference I can exclude by using the TZ attribute in props.conf. But I am having another issue, with nanoseconds.
Dear ITWhisperer, thank you for your suggestion. Actually, we are planning to move Splunk Enterprise to a new network zone, which means the IPs are not static. We will therefore configure a DNS server for all Splunk instances so they can resolve each other. Regards.
Can you please be more descriptive on points 3, 4, 5, and 6? I am very new to Splunk admin and still learning things. Thanks.
Hi Team, I'm trying to set a customized event timestamp by extracting it from the raw data instead of using the current time as the event time. To achieve this, I created a sourcetype with the following settings from the Splunk web GUI after testing in a lower environment, but in production it is not functioning as expected. Raw data:
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:28", LAST_UPDATE_USER="xxxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:30", LAST_UPDATE_USER="xxxxx"
I want the timestamp in the TRAN_DATE_TIME field to be the event timestamp. We are pulling this data from a database using DB Connect. Could you please help us understand what's going wrong and how it can be corrected?
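A sketch of props.conf settings that would take the event time from TRAN_DATE_TIME instead of the leading column (the sourcetype name is a placeholder; note that with TIME_PREFIX set, MAX_TIMESTAMP_LOOKAHEAD counts from the end of the prefix match, and the quoted value here is 19 characters):

[your_db_sourcetype]
TIME_PREFIX = TRAN_DATE_TIME="
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

Alternatively, since the data comes in via DB Connect, you can select TRAN_DATE_TIME as the timestamp column in the DB Connect input configuration, which avoids index-time timestamp parsing entirely.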
You can still use your Splunk Enterprise environment to deploy for Cloud. You just have to remember the differences:
1) You have fewer API endpoints available in the Cloud (mostly the ones needed for interacting with the Splunk environment as a whole, not the "internal administrative" ones).
2) You don't get an admin role user; the most you can get is sc_admin.
3) Your calls are dispatched only to the SH tier; you can't REST to your indexers in the Cloud.
(If anyone can think of more differences, feel free to add to this list.)
Unless of course you want to integrate with Splunk ACS - this you won't get on-prem for obvious reasons.
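As an illustration of the ACS side, listing indexes on a Cloud stack looks roughly like this (the stack name and token are placeholders; the path follows the ACS adminconfig v2 pattern):

curl -s -H "Authorization: Bearer $ACS_TOKEN" https://admin.splunk.com/your-stack/adminconfig/v2/indexes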