All Posts

Charts have numeric scales for the y-axis, except for things like bubble charts, but even then the values are numeric, so it is unlikely that you can get a chart like the one you proposed. What are you trying to show? There may be alternative ways of representing the data.
Hi all, I have two fields, eventfield2 and eventfield3, with values of eventfield3 = LHCP, RHCP, LHCP and values of eventfield2 = RHCP, RHCP, LHCP. I want a result like the one shown. Thanks for your time in advance.
Up. A week ago, I tried to enable DEBUG logging to find the root cause, but found only similar events without anything helpful for finding the root cause.
@doeh - You don't need to ingest the logs; just modify the lookup directly, but with the help of REST endpoints instead of modifying the file. The document below lists the methods that you can use. https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTknowledge#data.2Flookup-table-files.2F.7Bname.7D I cannot tell what changed after the upgrade, but what I can certainly tell you is that direct file modification is not a recommended practice, and it definitely will not work in a Search Head Cluster. So it's a good idea to switch to the better approach. I hope this helps! Kindly upvote if it does!!!
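As a minimal sketch of that REST approach (the host, credentials, app name "search", and lookup name "mylookup.csv" are all hypothetical; adjust them to your environment):

    # Hypothetical names throughout. First stage the updated CSV on the search
    # head where splunkd can read it:
    #   $SPLUNK_HOME/var/run/splunk/lookup_tmp/mylookup.csv
    # Then point the existing lookup-table-file object at the staged copy:
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/search/data/lookup-table-files/mylookup.csv \
        -d 'eai:data=/opt/splunk/var/run/splunk/lookup_tmp/mylookup.csv'

Because the update goes through the REST layer, it respects permissions and is replicated correctly in a Search Head Cluster, which direct file edits are not.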
I just found it in a few files:
./apps/splunk_monitoring_console/lookups/hwf-list.csv
./apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv
./apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv.c
I haven't removed them yet. The fact is we are going to rebuild the DMC/LM in a matter of weeks and will see if these errors appear again, but I think they won't. Until now it doesn't seem to matter; it all works great. grts jari
Hi @adoumbia, as @ITWhisperer said, it's really difficult to help you without knowing the events to apply the search to. Anyway, if you need a sample brute force attack search, you can look in the Splunk Security Essentials App (https://splunkbase.splunk.com/app/3435), where you can find what you're searching for along with many other security use cases. Ciao. Giuseppe
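For a rough starting point, a minimal brute-force sketch might look like the following (the index, sourcetype-independent field names src_ip and user, and the thresholds are all assumptions; map them to your own authentication data, which is what the Security Essentials searches do properly):

    index=auth action=failure
    | stats count as failures dc(user) as distinct_accounts by src_ip
    | where failures > 10 AND distinct_accounts > 3

This counts failed logins per source IP and flags sources that failed many times across several different accounts; the same stats/where pattern works with "by user" or "by host" depending on which pivot you need.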
I tried the suggestions above. The SPL against the _internal index doesn't show modifications to dashboards. The SPL against the _audit index does, but it shows a numeric ID for the user, which I believe to be unrelated to the actual user. I say this because this same ID is responsible for 99% of action=modify events across the platform, so I would presume it to be the splunk system user.
It is the size of the evtx files on disk. I have confirmed I have not reached the limit. The size after indexing is much lower than the size on disk, as it is not loading all the files.
OK. I re-read your config and there is more going on underneath than meets the eye.

Firstly, Splunk index size management is not a precise thing.

Secondly, bucket rolling happens on specific conditions. Most importantly, hot buckets do _not_ roll because of size restrictions on the index. Hot buckets roll only when they meet the hot bucket rolling criteria (the bucket grows too big, sits idle too long, and so on). So if you have a metric index, your buckets will probably grow to the maximum permissible size (which in your case is 1GB per bucket) and then some (Splunk adds some data when it closes the bucket). Only then will a bucket be rolled to warm, after which it can be rolled almost immediately to frozen if needed, but no earlier.

Thirdly, you have maxHotBuckets=auto and you didn't redefine metric.maxHotBuckets, which by default is also "auto". That means Splunk will create 6 hot buckets for that index, so your index will happily grow to at least 6GB regardless of your maxTotalDataSizeMB. If you're way over that value (like you've reached some 20 or 30GB), it might be worth troubleshooting with support, since generally that shouldn't happen.

Oh, and hot buckets roll on indexer restart, so it's natural that your disk usage goes down when you restart your indexer.
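To make the moving parts concrete, here is a minimal indexes.conf sketch for a hypothetical metric index (the stanza name and values are illustrative, not taken from the poster's actual config):

    [my_metrics]
    datatype = metric
    homePath   = $SPLUNK_DB/my_metrics/db
    coldPath   = $SPLUNK_DB/my_metrics/colddb
    thawedPath = $SPLUNK_DB/my_metrics/thaweddb

    # Total-size cap; enforced by freezing warm/cold buckets,
    # never by rolling hot ones.
    maxTotalDataSizeMB = 5120

    # Per-bucket size at which a hot bucket rolls to warm
    # ("auto" is roughly 750 MB).
    maxDataSize = auto

    # Both default to "auto"; for a metric index this means up to 6
    # concurrent hot buckets, so the index can exceed
    # maxTotalDataSizeMB while they are all still open.
    maxHotBuckets = auto
    metric.maxHotBuckets = auto

The key point the settings illustrate: maxTotalDataSizeMB only ever acts on warm and cold buckets, so the hot-bucket settings determine how far past the cap the index can temporarily grow.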
Hi @MMMM, Is that the size of the evtx files on disk or the size of the events after they're indexed? Have you confirmed you haven't reached the Splunk Free license limit?
Hi tscroggings, thanks a lot for replying, and sorry for not mentioning the size of the data. The size of the data has always been below 100 MB.
This thread is more than two years old with an accepted answer. It's also on a completely different topic. For a better chance at having people see and respond to your request, please post a new question.
Hi @kbrisson, Yes, it's possible, although the "how" is a long answer, and I don't have any active Tenable.sc or Tenable.io data to demo with. A few key points to remember:

- Tenable data is relational, but the Splunk data will be a point-in-time snapshot of assets and scan results represented as a time series.
- Each query returns the latest scan results from all repositories the configured account can access.
- You'll need to deduplicate assets and vulns using time ranges that cover the span of the first seen and last seen timestamps for the assets and vulns of interest.
- UUIDs may be globally unique, but if you have multiple repositories and/or multiple Tenable instances, you'll need to deduplicate by Tenable instance, repository, and UUID*.

* UUID isn't the only field used to uniquely identify assets. Check the uniqueness/hostUniqueness field to see which fields create a composite key that uniquely identifies a host.

Some apps, e.g. Tenable's, attempt to work around these issues by storing data in a KV store collection; however, the collection can grow quite large, limiting its usefulness as a search tool. It doesn't scale. You may have better luck defining reports in Tenable and pulling the report results into Splunk.
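As a rough illustration of the UUID-based combination the question asks about, a stats-based sketch might look like this (the index, sourcetypes, and field names below are assumptions rather than the Tenable add-on's actual schema; check your own events for the real names, and add the deduplication described above for production use):

    index=tenable sourcetype IN ("tenable:assets", "tenable:vuln")
    | stats latest(tags) as tags values(plugin_name) as plugin_names dc(plugin_id) as vuln_count by uuid
    | search tags="*production*"

Because stats merges every row sharing a uuid, the asset's tags and the vulnerability fields land on the same result row, which can then be filtered by tag.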
I'm sure this has been asked before, but I can't find the answer. I'm looking to use Splunk to provide better metrics from Tenable. The data that is sent into Splunk from Tenable has two source types that I'm interested in: asset data and vuln data. I need to combine the two of them (UUID is the common field) so that I can then filter the data set down to specific tags that have been applied to the assets. This way, I can start creating better historical dashboards and reports. I think what I need to do is match the UUIDs from both source types, which hopefully will then take all the vuln data and list it under the one unique UUID. From there, I need to be able to filter based on the tags created in Tenable. Is this possible? Thanks
Hi @MMMM, Splunk Free is limited to 500 MB of ingest per day. How large are the indexed events? You can check for license alerts under Settings > Licensing, although an alert should also appear under Messages. You can run a simple search to see daily usage over time:

    | tstats sum(PREFIX(b=)) as bytes where index=_internal source=*license_usage.log* TERM(type=RolloverSummary) earliest=-7d@d latest=now by _time span=1d
    | eval MB=round(bytes/1024/1024)

If your daily usage is over 500 MB, Splunk Free will stop indexing new data, i.e. your evtx files, when the limit is reached.
Using transaction is rarely a good solution, as it has numerous limitations, and results will silently disappear, as you have noticed.

It seems you're looking for the same msg within a 5 minute window that has a syscall and is not from certain comm types, but given that audit messages are typically time based, can you elaborate on what you're trying to do here? You are asking Splunk to hold 5 minutes of data in memory for every msg combination, so if your data volume is large, lots of those combinations will get discarded. Whenever you use transaction, you should filter out as much data as possible before you use it.

Can you give an example of what groups of events you are trying to collect together? The stats command is generally a much better way of doing this task and does not have those limitations; see the sketch below.

Also, note that "sort by date" is not valid SPL, as "by" is treated there as a field name rather than a keyword - just use "sort date".
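As a hedged sketch of the stats approach (the index, sourcetype, and excluded comm values are placeholders; only the msg/syscall/comm field names come from the question):

    index=linux sourcetype=auditd syscall=* NOT comm IN ("crond", "sshd")
    | bin _time span=5m
    | stats count values(syscall) as syscalls values(comm) as comms by _time msg

Here bin buckets events into fixed 5 minute windows and stats groups them by msg within each window, so nothing has to be held open in memory per msg value. Note that this gives fixed windows rather than transaction's rolling 5 minute span, which is often close enough in practice.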
If you want to add these fields to a table you are creating but don't know what the fields are called, then you can use @ITWhisperer's technique, but change it slightly so that it is

    ... | eval cust_field_{name}=value
    | table fields_you_want cust_field_*
    | rename cust_field_* as *

This will effectively give you cust_field_fieldA and so on with that consistent prefix; you can then use the table statement to table out the fields you want along with all those custom fields, and finally use the wildcard rename to get rid of the prefix.
Please share some sample anonymised events so that we can see what you are dealing with. Please explain which parts of the events are important for what you are trying to discover. Please share what you would like the results to look like. Without this type of information, we are reduced to attempting to read your mind (and my mind-reading license has been revoked after the unfortunate incident with the estate agent!)
Hi Paul, thanks for the information. However, the heading below isn't actually a single table; it's markdown text. The highlighted items below are individual markdown text blocks, where I don't find any such option.
I want to find out which IP address, hostname, or username has failed multiple times to log in to multiple accounts. I am trying to detect a brute force attack.