@ITWhisperer Thanks for the information. Yes, my actual data is in JSON format. Could you please suggest what I need to do with props so that the events can be parsed properly with the timestamp field of the events?
Try extending your MAX_TIMESTAMP_LOOKAHEAD to include the part of the event containing the TRAN_DATE_TIME field (when counted from the beginning of the event data).
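For example, something along these lines in props.conf (the sourcetype name and the value 300 are illustrative; a TIME_PREFIX is most likely also needed, since otherwise Splunk will latch onto the leading load timestamp first):

[your_dbconnect_sourcetype]
MAX_TIMESTAMP_LOOKAHEAD = 300
TIME_PREFIX = TRAN_DATE_TIME="
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Note that once TIME_PREFIX matches, the lookahead is counted from the end of the match rather than from the start of the event, so it can then be much smaller.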
I have lost count of the number of times we have suggested (requested) that event data is shown in raw format (preferably in a code block using the </> button). Splunk will be processing the raw data, not the formatted, "pretty" version you have shown us. In light of this, is your actual raw event data a JSON object, and therefore wouldn't the TIME_PREFIX be more like "time":" (perhaps with some spaces \s)?
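Something like this in props.conf, as a sketch assuming the raw event really is a JSON object (the sourcetype name is illustrative, and the TIME_FORMAT mirrors the one you already posted):

[your_json_sourcetype]
TIME_PREFIX = "time"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
MAX_TIMESTAMP_LOOKAHEAD = 40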
For point 4... We will create separate AD groups for the different application teams, assign each group its own index, and then restrict their access to their own index only. That is the idea, and that is the reason we create indexes based on the applications. Is it a good approach, or is there any other way to restrict them other than by index? For example, 10 applications' data in one index, with no team able to see the others' data - is that possible? Please tell me.
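For reference, per-index restriction of this kind is typically expressed per role in authorize.conf; a minimal sketch (the role and index names are illustrative):

[role_app_team_a]
importRoles = user
srchIndexesAllowed = app_a_index
srchIndexesDefault = app_a_index

Restricting teams within a single shared index is also possible via srchFilter on the role, but separate indexes are the cleaner and more common approach.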
@ITWhisperer That timezone difference I can exclude by using the TZ attribute in props. But I am having another issue, with the nanoseconds.
Dear ITWhisperer, Thank you for your suggestion. Actually, we are planning to move Splunk Enterprise to a new network zone, which means the IPs will not be static. We will then define a DNS server for all Splunk instances so they can resolve each other. Regards.
Can you please be more descriptive on points 3, 4, 5, and 6? I am very new to Splunk admin and still learning things. Thanks.
Hi Team, I'm trying to add a customized event timestamp by extracting it from the raw data instead of using the current time as the event time. To achieve this I created a sourcetype with the following settings from the Splunk web GUI, after testing in a lower environment. But in production it is not functioning as expected.

Raw data:

2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:28", LAST_UPDATE_USER="xxxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:30", LAST_UPDATE_USER="xxxxx"

I want the timestamp in the TRAN_DATE_TIME field to be the event timestamp. We are pulling this data from a database using DB Connect. Could you please help us understand what's going wrong and how it can be corrected?
You can still use your Splunk Enterprise environment to develop deployments for Cloud. You just have to remember the differences:
1) You have fewer API endpoints available in the Cloud (mostly the ones needed for interacting with the Splunk environment as a whole, not the "internal administrative" ones).
2) You don't get an admin role user; the most you can get is sc_admin.
3) Your calls are dispatched only to the SH tier; you can't REST to your indexers in the Cloud.
(If anyone can think of more differences, feel free to add to this list.)
Unless of course you want to integrate with Splunk ACS - that you won't get on-prem, for obvious reasons.
These are private IP addresses (and therefore there is little point redacting them, as they are not reachable from outside your network!). Are the IP addresses statically assigned? If so, you could add a name resolution entry for them to your /etc/hosts file (or similar) so that a name lookup would resolve the address.
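For example (the hostnames and addresses here are purely illustrative):

10.0.1.11   splunk-idx1.example.local   splunk-idx1
10.0.1.12   splunk-idx2.example.local   splunk-idx2

The same entries would go onto every instance that needs to reach those peers, so the names resolve consistently across the deployment.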
So the time is out by exactly 5 hours, which corresponds to your timezone, so it is in fact correct. Are there any other discrepancies apart from this (which is now accounted for)?
Wait, but if your local timezone is EST and your profile is configured with EST, that's actually the proper timestamp. The source is reporting 14:15 UTC, so it's 9:15 EST.
Hold up there. You're mixing different things.
1. Deployment server is a component used to distribute apps to forwarders, and sometimes to standalone indexers or standalone search heads. It is _not_ used for managing clustered indexers!
2. You don't send data to the CM! The CM manages the configuration and state of the indexers but isn't involved in indexing and/or processing the incoming data.
3. I have no idea why you're extracting the fqdn as an indexed field. (True, if you're often doing tstats over it, it can make sense, but you'd probably also normalize your data to CIM so you can do tstats over the dataset.)
4. Are you sure you need so many indexes? (Just asking - maybe you indeed do, but people tend to be "trigger-happy" with creating too many indexes.)
5. I think you should overwrite the index field with := rather than simply assign a new value with = (see the sketch after this list).
6. You know it will be slow, right? Why not do it one step earlier - on your syslog daemon?
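A minimal sketch of what I mean in point 5 (the stanza names, the routing condition, and the index names are purely illustrative):

# props.conf
[your_syslog_sourcetype]
TRANSFORMS-route = route_to_app_index

# transforms.conf
[route_to_app_index]
INGEST_EVAL = index:=if(match(fqdn, "appA"), "appa_idx", index)

With INGEST_EVAL, = would add another value alongside the existing indexed field, while := replaces it - which is what you want when rerouting events to a different index.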
Dear splunkers, While tuning Splunk Enterprise, we are required to change every connection between Splunk instances from IP address to domain name. Everything in server.conf is done except this. So, is it possible to change these peer URIs from IP address to domain name, and where can we find this configuration? Thanks & best regards, Benny
Hi @hahhhaxin, it's really difficult to read the output of btool. Do you have this configuration on the UF? I don't see DATETIME_CONFIG = CURRENT in your output on the UF. Ciao. Giuseppe
@PickleRick I already tried that and added the attribute under props, but this is also not working: "TIMESTAMP_FIELDS = time", and I also added KV_MODE=json.
Hi @fahimeh, you have to use the add-on "CCX Add-on for ManageEngine Products (ADAudit Plus)" (https://splunkbase.splunk.com/app/7004) from Splunkbase and follow the instructions. Ciao. Giuseppe
It kinda makes sense. With SAML authentication you don't actually authenticate against the SP but against the IdP and then pass the assertions around. How do you expect it to work when you don't authenticate against the IdP?
I'm not 100% sure whether "normal" timestamp extraction works with indexed extractions. You could try setting TIMESTAMP_FIELDS. Also - why indexed extractions? Why not just KV_MODE=json?
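Something along these lines, if you stay with indexed extractions (the sourcetype name is illustrative, and the TIME_FORMAT assumes the time field shown in your event):

[your_json_sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = time
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z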
@ITWhisperer The server is on EST time.

@bowesmana I have tried the settings below but nothing works for me. Is there any workaround I need to apply?

CHARSET = UTF-8
#AUTO_KV_JSON = false
DATETIME_CONFIG =
#INDEXED_EXTRACTIONS = json
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
MAX_TIMESTAMP_LOOKAHEAD = 550
TIME_PREFIX = time:\s+
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
category = Custom
pulldown_type = true

Example pattern of event:

{
  classofpayload: com.v.decanter.deca.generic.domain.command.PurgeCommand
  data: {
    batchSize: 1000
    retentionMinutes: 43200
    windowDurationSeconds: 600
  }
  datacontenttype: application/json
  id: 32e31ec6-2362-4b46-966e-ec4bdbb3llbe
  messages: [ ]
  source: decanter-scheduler
  spanid: 0000000000000000
  specversion: 1.0
  time: 2024-11-18T04:15:00.057785Z
  traceid: 00000000000000000000000000000000
  type: PurgeEventOutbox
}