All Posts

Hi tscroggings, Thanks a lot for replying, and sorry for not mentioning the size of the data. It has always been below 100 MB.
This thread is more than two years old with an accepted answer.  It's also on a completely different topic.  For a better chance at having people see and respond to your request, please post a new question.
Hi @kbrisson, Yes, it's possible, although the "how" is a long answer, and I don't have any active Tenable.sc or Tenable.io data to demo with. A few key points to remember:

- Tenable data is relational, but the Splunk data will be a point-in-time snapshot of assets and scan results represented as a time series.
- Each query returns the latest scan results from all repositories the configured account can access. You'll need to deduplicate assets and vulns using time ranges that cover the span of first seen and last seen timestamps for the assets and vulns of interest.
- UUIDs may be globally unique, but if you have multiple repositories and/or multiple Tenable instances, you'll need to deduplicate by Tenable instance, repository, and UUID*.

* UUID isn't the only field used to uniquely identify assets. Check the uniqueness/hostUniqueness field to see which fields create a composite key that uniquely identifies a host.

Some apps, e.g. Tenable's, attempt to work around these issues by storing data in a kvstore collection; however, the collection can grow quite large, limiting its usefulness as a search tool. It doesn't scale. You may have better luck defining reports in Tenable and pulling the report results into Splunk.
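A hedged sketch of the time-range deduplication described above, assuming a hypothetical sourcetype tenable:sc:vuln with repository and uuid fields (the sourcetype, field names, and 30-day window are all placeholders; adjust to your data):

sourcetype="tenable:sc:vuln" earliest=-30d@d
| dedup repository uuid sortby -_time

This keeps only the most recent event per repository/UUID pair within the window, which approximates "latest scan result" per asset.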
I'm sure this has been asked before but I can't find the answer. I'm looking to use Splunk to provide better metrics from Tenable. The data that is sent into Splunk from Tenable has two source types that I'm interested in: asset data and vuln data. I need to combine the two of them (UUID is the common field) so that I can then filter the data set down to specific tags that have been applied to the assets. This way, I can start creating better historical dashboards and reports. I think what I need to do is match the UUIDs from both sourcetypes, which hopefully will then take all the vuln data and list it under the one unique UUID. From there, I need to be able to filter based on the tags created in Tenable. Is this possible? Thanks
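A minimal sketch of the UUID match described in this question, assuming hypothetical sourcetypes tenable:assets and tenable:vuln with uuid, tags, and plugin_name fields (all of these names are assumptions; adjust to your actual sourcetypes and extractions):

(sourcetype="tenable:assets" OR sourcetype="tenable:vuln")
| stats values(tags) as tags values(plugin_name) as vulns by uuid
| search tags="production"

This folds the asset tags and the vulnerability names onto one row per UUID, after which you can filter on any tag value.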
Hi @MMMM, Splunk Free is limited to 500 MB of ingest per day. How large are the indexed events? You can check for license alerts under Settings > Licensing, although an alert should also appear under Messages.

You can run a simple search to see daily usage over time:

| tstats sum(PREFIX(b=)) as bytes where index=_internal source=*license_usage.log* TERM(type=RolloverSummary) earliest=-7d@d latest=now by _time span=1d
| eval MB=round(bytes/1024/1024)

If your daily usage is over 500 MB, Splunk Free will stop indexing new data, i.e. your evtx files, when the limit is reached.
Using transaction is rarely a good solution, as it has numerous limitations and results will silently disappear, as you have noticed. It seems you're looking for the same msg within a 5-minute window that has a syscall and is not from certain comm types, but given that audit messages are typically time based, can you elaborate on what you're trying to do here?

You are asking Splunk to hold 5 minutes of data in memory for every msg combination, so if your data volume is large then many of those combinations will get discarded. Whenever you use transaction, you should filter out as much data as possible before you use it.

Can you give an example of what groups of events you are trying to collect together? The stats command is generally a much better way of doing this task and does not have these limitations.

Also, note that sort by date is not valid SPL, as "by" is treated here as a field and not a command word - just use sort date
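As a rough illustration of the stats approach, assuming fields msg, syscall, and comm as in the transaction search being discussed (the index name, the 5-minute bucketing, and the excluded comm values are placeholders):

index=audit syscall=* NOT (comm="cron" OR comm="sshd")
| bin _time span=5m
| stats count values(syscall) as syscalls by _time msg

Unlike transaction, stats is distributable and does not evict open groups under memory pressure, so groups do not silently disappear the way transactions can.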
If you want to add these fields to a table you are creating but don't know what the fields are called, then you can use @ITWhisperer's technique, but change it slightly so that it is

... | eval cust_field_{name}=value
| table fields_you_want cust_field_*
| rename cust_field_* as *

which will effectively give you cust_field_fieldA and so on with that consistent prefix. You can then use the table statement to table out the fields you want along with all those custom fields, and finally use the wildcard rename to get rid of the prefix.
Please share some sample anonymised events so that we can see what you are dealing with. Please explain which parts of the events are important for what you are trying to discover. Please share what you would like the results to look like. Without this type of information, we are reduced to attempting to read your mind (and my mind-reading license has been revoked after the unfortunate incident with the estate agent!)
Hi Paul, Thanks for the information. However, the heading below is not actually a single table; it's markdown text. The highlighted items below are individual markdown text blocks, where I don't find any such option.
I want to find out which IP address, hostname, or username has failed multiple times to log in to multiple accounts. I am trying to detect a brute force attack.
Please share some sample anonymised events so that we can see what you are dealing with. Please explain which parts of the events are important for what you are trying to discover. Please share what you would like the results to look like. Without this type of information, we are reduced to attempting to read your mind (and my mind-reading license has been revoked after the unfortunate incident with the estate agent!)
Please share some raw anonymised events so we can see what you are dealing with so we can try and help you further. Please use the code block </> above to preserve the format of the events so that we can suggest the correct field extractions for you.
I am trying to write an SPL query to detect an event where a single source IP address or a user fails multiple times to log in to multiple accounts. Can anyone help me write it?
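A hedged sketch of one way to approach this, assuming authentication events with src_ip, user, and action fields where failures are action=failure (the index, field names, window, and thresholds are all assumptions; adjust them to your data and tolerance for noise):

index=auth action=failure
| bin _time span=10m
| stats dc(user) as distinct_users count as failures by src_ip _time
| where distinct_users > 5 AND failures > 10

This counts, per source IP and 10-minute window, how many distinct accounts saw failed logins and how many failures occurred in total, then keeps only the sources that exceed both thresholds.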
@corti77 Just curious, have you set up the API calls to NetApp using the Splunk_TA_ontap and SA-Hydra apps? We are currently setting it up and have run into an issue we can't resolve, and we cannot find much help online.
Is there any way to toggle the data points on and off via a radio button added to a dashboard? When doing line charts over long time ranges, the numbers get tightly packed together and overlap.
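In Simple XML, overlapping value labels on a line chart are controlled by the charting.chart.showDataLabels option, which accepts none, all, or minmax. A hedged sketch that binds a radio input to that option through a token (the token name and the search are placeholders):

<form>
  <fieldset submitButton="false">
    <input type="radio" token="show_labels">
      <label>Data labels</label>
      <choice value="all">On</choice>
      <choice value="none">Off</choice>
      <default>none</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_internal | timechart count</query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.showDataLabels">$show_labels$</option>
      </chart>
    </panel>
  </row>
</form>

Switching the radio button re-renders the chart with labels shown or hidden, without re-running the search.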
I am trying to write an SPL query to detect an event where a single source IP address or a user fails multiple times to log in to multiple accounts. Can anyone help me write it?
Our Nessus vulnerability scanner is flagging that the server_pkcs1.pem certificate is expired. I have verified that it is expired but am unable to renew it. Stopping the service, renaming the file, and restarting the service does not recreate it. How do you renew this certificate?
This is awesome, but is there a way to make the results into columns (additional fields on my results)?
This is the better option. Keep in mind that when you configure two outputs on Splunk and one of them stalls, the other also stops quite soon.
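For reference, a minimal outputs.conf sketch with two output groups (group names and server addresses are placeholders). When either target blocks, the forwarder's output queue fills, and the other group stops receiving data shortly afterwards:

[tcpout]
defaultGroup = primary_indexers, secondary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

[tcpout:secondary_indexers]
server = indexer2.example.com:9997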
If you want it, you can vote up my proposal on Splunk Ideas. https://ideas.splunk.com/ideas/EID-I-2441