All Posts

This is so vague - there can be a gazillion reasons for not being able to connect. Does your desktop meet the minimum requirements for a Splunk installation? Is the splunkd process running? Do you see errors in $SPLUNK_HOME/var/log/splunk/splunkd.log? Did you change anything in your computer's configuration (most importantly network/firewall settings)?
It doesn't work like that. For TA_auditd to work, you ingest the contents of /var/log/audit/auditd.log in text form. The settings you're trying to manipulate do something completely different - they tell Splunk how to _interpret_ the received data. You can't use them to make JSON from plain text or anything like that.
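As a minimal sketch of that ingestion side (the sourcetype and index names here are assumptions - use whatever the TA's documentation specifies, and point the monitor at the file your auditd actually writes, commonly /var/log/audit/audit.log), inputs.conf on the forwarder would look something like:
[monitor:///var/log/audit/audit.log]
disabled = 0
sourcetype = auditd
index = linux_audit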
Please, how do I hash it out as proposed by the solution?
This is a 7-year-old thread. You'd get much more visibility if you posted your question as a new thread (possibly dropping in a link to this thread for reference if it's relevant to your case).
I really appreciate your advice. Thank you for the discussion.
Sure. Whatever floats your boat. But seriously - it's like ITIL: adopt and adapt. If something works for you and you are aware of your approach's limitations, go ahead.
For this situation, we have a weekly alert that shows "missing hosts":
| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| where DeltaSeconds>604800
| eval LastEventTime = strftime(latest,"%Y-%m-%d %H:%M:%S")
| eval DeltaHours = round(DeltaSeconds/3600)
| eval DeltaDays = round(DeltaHours/24)
| join index [| inputlookup generated_file_with_admins_mails.csv]
| table index, host, LastEventTime, DeltaHours, DeltaDays, email_to
Using the sendresults app, Splunk alerts the responsible employee(s) about these hosts. This search shows only hosts that haven't sent syslog for more than 7 days, and that's OK for us. In most cases, this alert shows only hosts that we removed from our infrastructure. But if it becomes necessary, I can run this alert more frequently or separate it into several searches with different "missing" conditions. I understand that this approach cannot handle, for example, some intermittent network or software lags, but I have used it for about a year and all is quite fine, apart from some rare cases (like this topic).
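If different indexes ever do need different "missing" conditions, one rough variation (missing_host_thresholds.csv with columns index and max_age_seconds is a hypothetical lookup, not part of the original alert) would replace the fixed 7-day threshold like this, with the rest of the alert unchanged:
| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| lookup missing_host_thresholds.csv index OUTPUT max_age_seconds
| where DeltaSeconds > coalesce(max_age_seconds, 604800)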
While the general question is of course valid and needs to be considered properly, I saw similar cases in my experience - splitting data from a single source into separate indexes. The most typical case is when you have a single solution providing logs for separate business entities (like a central security appliance protecting multiple divisions or even companies from a single business group). You might want to split events so that each unit has access only to its own events (possibly with some overseeing security team having access to all those indexes). So there are valid use cases for similar setups
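As a rough sketch of the access-control side (the role and index names below are made up for illustration), each unit's role can be limited to its own index in authorize.conf, while an overseeing role is allowed to search all of them:
[role_unit_a_analyst]
srchIndexesAllowed = unit_a_fw
srchIndexesDefault = unit_a_fw

[role_central_soc]
srchIndexesAllowed = unit_a_fw;unit_b_fw
srchIndexesDefault = unit_a_fw;unit_b_fw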
Hello richgalloway, I've checked splunkd.log for the past week - no errors found. Port 8080 is closed even on the old clusters, but we never had trouble with the replication and search factors. Ports 9887 and 8089 are open across all the clusters. But the fixup tasks are still pending - Fixup tasks: 301, In progress: 0.
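One quick sanity check (just a sketch - run it on the cluster manager, or point the rest command's splunk_server option at it) is to list peer status over REST and confirm every indexer is reported as Up:
| rest /services/cluster/master/peers
| table label status site bucket_count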
@anooshac  Can you please share your full sample code? KV
Your data illustration strongly suggests that it is part of a JSON event like
{"message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}", "some_field":"somevalue", "some_other_field": "morevalue"}
In this case, Splunk should have given you a field named "message" that has this value:
"message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}"
What the developer is trying to do is to embed more data in this field, partially also in JSON. For long-term maintainability, it is best not to treat that as text, either. This means that regex is not the right tool for the job. Instead, try to get the embedded JSON first.
There is just one problem (in addition to the missing closing double quote for the time value): the string \xxxxy is illegal in JSON. If this is the real data, Splunk would have bailed and NOT given you a field named "message". In that case, you will have to deal with that first. Let's explore how later. For now, suppose your data is actually
{"message":"sypher:[tokenized] build successful -\\\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}", "some_field":"somevalue", "some_other_field": "morevalue"}
As such, Splunk would have given you a value for message like this:
sypher:[tokenized] build successful -\xxxxy {"data":{"account_id":"ABC123XYZ","activity":{"time":"2024-05-31T12:37:25Z"}}
Consequently, all you need to do is
| eval jmessage = replace(message, "^[^{]+", "")
| spath input=jmessage
You will get the following fields:
data.account_id = ABC123XYZ
data.activity.time = 2024-05-31T12:37:25Z
some_field = somevalue
some_other_field = morevalue
Here is an emulation of the "correct" data you can play with and compare with real data:
| makeresults
| eval _raw = "{\"message\":\"sypher:[tokenized] build successful -\\\xxxxy {\\\"data\\\":{\\\"account_id\\\":\\\"ABC123XYZ\\\",\\\"activity\\\":{\\\"time\\\":\\\"2024-05-31T12:37:25Z\\\"}}\", \"some_field\":\"somevalue\", \"some_other_field\": \"morevalue\"}"
| spath
``` data emulation above ```
Now, if your raw data indeed contains \xxxxy inside a JSON block, you can still rectify that with text manipulation so you get legal JSON. But you have to tell your developer that they are logging bad JSON. (Recently there was a case where an IBM mainframe plugin sent Splunk bad data like this. It is best for the developer to fix this kind of problem.)
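If you do need to clean the illegal \xxxxy escape yourself, one possible approach (only a sketch - the sourcetype name is a placeholder and the exact pattern depends on what the application really emits) is a SEDCMD in props.conf on the indexer or heavy forwarder, stripping the offending backslash so the JSON becomes legal before parsing:
[your:sourcetype]
SEDCMD-fix_bad_escape = s/\\xxxxy/xxxxy/g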
Hi @Splunk-Star , let me understand: this is one of your events; if you have many events, they are displayed, but if you have only one event, it isn't displayed - is that correct? Do you have this issue only with these logs or also with other logs? Maybe the issue is related to the length of the log; I encountered an issue with very long logs, which were displayed with a very long delay because of their size. Did you try to truncate it, e.g. with substr:
index = "*" "a39d0417-bc8e-41fd-ae1f-7ed5566caed6" "*uploadstat*" status=Processed
| eval _raw=substr(_raw,1,100)
Ciao. Giuseppe
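To confirm whether event size really is the culprit, a quick check (same search as above, just measuring the raw length instead of displaying the event) could be:
index = "*" "a39d0417-bc8e-41fd-ae1f-7ed5566caed6" "*uploadstat*" status=Processed
| eval raw_len = len(_raw)
| stats max(raw_len) as max_raw_len count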
Hi @iam_ironman , only one question: why? Indexes aren't database tables; indexes are containers where logs are stored, and log categorization is done with the sourcetype field. Usually custom indexes are created mainly when there are different requirements for retention and access grants, and secondarily for different log volumes. So why do you want to create so many indexes, which you will have to maintain and which, after the retention time, will be empty? Anyway, the rex you used is wrong: you don't need to extract the index field to assign a dynamic value to it; you have to identify a group and use it for the index value:
[new_index]
SOURCE_KEY = MetaData:Source
REGEX = ^(\w+)\-\d+
FORMAT = $1
DEST_KEY = _MetaData:Index
Ciao. Giuseppe
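For completeness, the transform only takes effect if it is referenced from props.conf - a minimal sketch, assuming a hypothetical sourcetype name:
[your:sourcetype]
TRANSFORMS-route_index = new_index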
I have a set of DCs from which I need to monitor the device logs located in a shared path. I tried entering the below stanzas for each server and DC separately, which worked. But when I try to standardise this monitoring with a pattern, to avoid pushing the configs each time, it does not work. Can you let me know where it's going wrong?
[monitor://\\azwvocasp00005\PRDC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod
Now I am trying:
[monitor://\\azwvocasp000*\*DC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod
Thanks in advance
I did, actually. I just found out that you need to try with the indexer instead of the search head. Also, attach an IAM role to your Splunk server and use the ARN of that same role in your Splunk config.
I do not know much about Cribl, but these settings in props.conf might help:
props.conf on the UF:
[test]
EVENT_BREAKER_ENABLE=true
EVENT_BREAKER=([\r\n]+)\{ \"__CURSOR\"
props.conf on the indexer (assuming REALTIME_TIMESTAMP is the timestamp field):
[test]
KV_MODE=JSON
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\{ \"__CURSOR\"
MUST_BREAK_AFTER=\}
TIME_PREFIX=\"__REALTIME_TIMESTAMP\"\s\:\s\"
TIME_FORMAT=%s%6N
MAX_TIMESTAMP_LOOKAHEAD=18
yes, the data is sent from the Splunk UF --> Cribl (Stream / Worker) --> Splunk Indexer
So after much thought and deliberation, this is how you can see the real-time EPS and the trends around it in UBA.
1. Add the parameter ?system to the URL, right before the # values.
2. Once done, proceed to Manage -> Data Sources -> Select Data Source to reveal the real-time EPS and the trends associated with it.
Dear Regina, I am more interested in closed cases. Let me explain my view. For example, say I get an issue with the Java agent working with some application which is rarely used by customers around the world, and such a case was already experienced by some customer; support provided a solution after a lot of troubleshooting and resolved it, and that ticket was closed back in 2023. It is a collection of brainstorming from experts and a great knowledge base. If the closed cases become volatile with this new migration, then when a similar issue occurs, the support team, consultants and customers have to sit for hours again to find the solution. There are many instances similar to this. It will take years for Cisco to build such a valuable knowledge base again. My humble request is to keep the database of older cases just as a reference point instead of deleting it forever. Thanks for considering my request. Jananie
You mentioned in your post you are using UF to send the data. Is the data going from Splunk UF --> Cribl --> Splunk indexer?