This is obviously a mistake on the docs page (unfortunately the dev docs don't include a feedback form). How would you write JS code with the Python SDK? It makes no sense.
There's a portal for such feature requests - https://ideas.splunk.com/  
Where did you put your props.conf (on which component)? And what does your ingest process look like? Because that's apparently not data from a Windows event log input.
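For reference, a quick way to confirm which props.conf settings actually win on a given instance is btool on the component doing the parsing; a minimal sketch, where "your:sourcetype" is a placeholder for the real sourcetype:
$SPLUNK_HOME/bin/splunk btool props list your:sourcetype --debug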
What are you using for authentication? If you are using an external authentication source (like LDAP or SAML), your users will get re-created as soon as they authenticate against that source.
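If you want to watch which accounts currently exist on the search head (and see them get re-created), a quick check could be a REST search like this sketch (requires permission to run the rest command):
| rest /services/authentication/users splunk_server=local
| table title roles email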
thanks @KendallW - I think $result.field$ will not work in this scenario? I am already using the subject line as you mentioned, but the variables come through blank in the email I receive.
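For what it's worth, $result.fieldname$ tokens in the email action are resolved from the first row of the alert's result set, so the field must exist (and be non-empty) in that row. A minimal sketch with hypothetical field names you can adapt:
| makeresults
| eval host="srv01", status="failed"
| table host status
with an email subject along the lines of: Alert for $result.host$ - status $result.status$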
Hi @Manish_Uniyal24, I would not recommend anything other than Splunk's own Training and Certification Programs.
https://www.splunk.com/en_us/training/free-courses/overview.html
https://www.splunk.com/en_us/training/certification.html?locale=en_us
These courses start at "What is Splunk" and go up to a very advanced level, so they should support your needs. I hope this helps! If it does, kindly upvote!
This is so vague - there can be a gazillion reasons for you not being able to connect. Does your desktop meet the minimum requirements for a Splunk installation? Is the splunkd process running? Do you see errors in $SPLUNK_HOME/var/log/splunk/splunkd.log? Did you change anything in your computer's configuration (most importantly network/firewall settings)?
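A couple of quick checks to start with (assuming a default Linux/macOS install path; adjust for Windows):
$SPLUNK_HOME/bin/splunk status
tail -n 50 $SPLUNK_HOME/var/log/splunk/splunkd.log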
It doesn't work like that. For TA_auditd to work you need to ingest the contents of /var/log/audit/auditd.log as text. The settings you're trying to manipulate do something completely different - they tell Splunk how to _interpret_ the received data. You can't use them to turn plain text into JSON or anything like that.
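For illustration, a minimal inputs.conf sketch on the forwarder reading that file; the sourcetype and index names here are assumptions, so check the TA's documentation for the exact sourcetype it expects:
[monitor:///var/log/audit/auditd.log]
disabled = 0
sourcetype = auditd
index = linux_audit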
Please, how do I hash it out (comment it out with #) as proposed by the solution?
This is a 7-year-old thread. You'd get much more visibility if you posted your question as a new thread (possibly dropping in a link to this one for reference if it's relevant to your case).
I really appreciate your advice. Thank you for the discussion.
Sure. Whatever floats your boat. But seriously - it's like ITIL - adopt and adapt. If something works for you and you are aware of your approach's limitations - go ahead.
For this situation, we have a weekly alert that shows "missing hosts":
| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| where DeltaSeconds>604800
| eval LastEventTime = strftime(latest,"%Y-%m-%d %H:%M:%S")
| eval DeltaHours = round(DeltaSeconds/3600)
| eval DeltaDays = round(DeltaHours/24)
| join index [| inputlookup generated_file_with_admins_mails.csv]
| table index, host, LastEventTime, DeltaHours, DeltaDays, email_to
Using the sendresults app, Splunk alerts the responsible employee(s) about these hosts. This search shows only hosts that haven't sent syslog for more than 7 days, and that's OK for us; in most cases the alert shows only hosts that we have removed from our infrastructure. If it becomes necessary, I can run this alert more frequently or split it into several searches with different "missing" conditions. I understand that this approach cannot handle, for example, some intermittent network or software lags, but I have used it for about a year and all is quite fine, excluding some rare cases (like this topic).
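If you ever want different "missing" thresholds per index instead of several separate searches, one possible sketch (the index names and thresholds here are made up) is to compute the threshold per index before filtering:
| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| eval Threshold = case(index=="firewall", 3600, index=="windows", 86400, true(), 604800)
| where DeltaSeconds > Threshold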
While the general question is of course valid and needs to be considered properly, I have seen similar cases in my experience - splitting data from a single source into separate indexes. The most typical case is when you have a single solution providing logs for separate business entities (like a central security appliance protecting multiple divisions or even companies from a single business group). You might want to split events so that each unit has access only to its own events (possibly with some overseeing security team having access to all those indexes). So there are valid use cases for similar setups.
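For illustration, index-time routing for such a split could look roughly like this sketch (the sourcetype, match pattern, and index name are all hypothetical):
# props.conf
[central:appliance]
TRANSFORMS-route_by_unit = route_unit_a
# transforms.conf
[route_unit_a]
SOURCE_KEY = _raw
REGEX = business_unit=UnitA
DEST_KEY = _MetaData:Index
FORMAT = unit_a_security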
Hello richgalloway, I've checked splunkd.log for the past week - no errors found. Port 8080 is closed even on the old clusters, but we never had trouble with the replication and search factors there. Ports 9887 and 8089 are open across all the clusters. But the fixup tasks are still pending: 301 fixup tasks, 0 in progress.
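In case it's useful, the cluster manager's CLI can also report per-peer status and what the pending fixups are waiting on:
splunk show cluster-status --verbose
splunk list cluster-peers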
@anooshac  Can you please share your full sample code? KV
Your data illustration strongly suggests that it is part of a JSON event like:
{"message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}", "some_field":"somevalue", "some_other_field": "morevalue"}
In this case, Splunk should have given you a field named "message" that has this value:
"message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}"
What the developer is trying to do is embed more data in this field, partially also in JSON. For long-term maintainability, it is best not to treat that as text, either. This means that regex is not the right tool for the job. Instead, try to get at the embedded JSON first.
There is just one problem (in addition to the missing closing double quote for the time value): the string \xxxxy is illegal in JSON. If this is the real data, Splunk would have bailed and NOT given you a field named "message". In that case, you will have to deal with that first. Let's explore how later.
For now, suppose your data is actually
{"message":"sypher:[tokenized] build successful -\\\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\"}}", "some_field":"somevalue", "some_other_field": "morevalue"}
As such, Splunk would have given you a value for message like this:
sypher:[tokenized] build successful -\xxxxy {"data":{"account_id":"ABC123XYZ","activity":{"time":"2024-05-31T12:37:25Z"}}
Consequently, all you need to do is
| eval jmessage = replace(message, "^[^{]+", "")
| spath input=jmessage
You will get the following fields and values:
data.account_id = ABC123XYZ
data.activity.time = 2024-05-31T12:37:25Z
some_field = somevalue
some_other_field = morevalue
Here is an emulation of the "correct" data you can play with and compare with real data:
| makeresults
| eval _raw = "{\"message\":\"sypher:[tokenized] build successful -\\\xxxxy {\\\"data\\\":{\\\"account_id\\\":\\\"ABC123XYZ\\\",\\\"activity\\\":{\\\"time\\\":\\\"2024-05-31T12:37:25Z\\\"}}\", \"some_field\":\"somevalue\", \"some_other_field\": \"morevalue\"}"
| spath
``` data emulation above ```
Now, if your raw data indeed contains \xxxxy inside a JSON block, you can still rectify that with text manipulation so you get legal JSON. But you have to tell your developer that they are logging bad JSON. (Recently there was a case where an IBM mainframe plugin sent Splunk bad data like this. It is best for the developer to fix this kind of problem.)
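As a follow-up to the "rectify with text manipulation" idea above, one possible sketch - assuming the only defects are the -\xxxxy token and a single missing closing brace, so adjust the rex to your real data before relying on it - is to pull out the embedded object, repair the tail, and then spath it:
| rex field=_raw "build successful -\S+\s+(?<jdata>\{\"data\":.*?\}\})"
| eval jdata = jdata . "}"
| spath input=jdata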
Hi @Splunk-Star,
Let me understand: this is one of your events; if you have many events, they are displayed, but if you have only one event, it isn't displayed - is that correct?
Do you have this issue only with these logs or also with other logs?
Maybe the issue is related to the length of the log: I encountered an issue with very long logs that were displayed with a very long delay because of their size.
Did you try to truncate it, e.g. with substr:
index = "*" "a39d0417-bc8e-41fd-ae1f-7ed5566caed6" "*uploadstat*" status=Processed
| eval _raw=substr(_raw,1,100)
Ciao.
Giuseppe
Hi @iam_ironman,
Only one question: why?
Indexes aren't database tables; indexes are containers where logs are stored, and log categorization is done with the sourcetype field.
Usually, custom indexes are created mainly when there are different requirements for retention and access grants, and secondarily for different log volumes.
So why do you want to create so many indexes, which you'll have to maintain and which, after the retention time, will be empty?
Anyway, the rex you used is wrong: you don't need to extract the index field to assign a dynamic value to it, you have to identify a capture group and use it for the index value:
[new_index]
SOURCE_KEY = MetaData:Source
REGEX = ^(\w+)\-\d+
FORMAT = $1
DEST_KEY = _MetaData:Index
Ciao.
Giuseppe
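For completeness, a transforms.conf stanza like this only takes effect when it is referenced from props.conf on the parsing tier; a minimal sketch, where the sourcetype name is just a placeholder:
[your:sourcetype]
TRANSFORMS-set_index = new_index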
I have a set of DCs from which I need to monitor the device logs located on a shared path. I tried entering the below stanzas for each server and DC separately, which worked. But when I try to standardise this monitoring with a pattern, to avoid pushing the configs each time, it does not work. Can you let me know where it's going wrong?
This works:
[monitor://\\azwvocasp00005\PRDC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod
Now I am trying:
[monitor://\\azwvocasp000*\*DC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod
Thanks in advance
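A side note on this (my understanding, not verified against this environment): wildcards in a monitor path are expanded by walking the filesystem down from the deepest non-wildcard directory, so a wildcard in the server portion of a UNC path (\\azwvocasp000*) may never match anything because there is no fixed base directory to walk. If that turns out to be the case, one fallback is a plain stanza per server, for example (server name below is hypothetical):
[monitor://\\azwvocasp00006\PRDC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod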