All Posts

Hi, I'd like to create a detailed report with info including the type of forwarder, the average KB/s, the OS, the IP, and the Splunk version, but also showing exactly which index each forwarder forwards to. Is it possible to reuse the forwarder instance search from the monitoring console and somehow connect it to each index?

`dmc_get_forwarder_tcpin` hostname=*
| eval source_uri = hostname.":".sourcePort
| eval dest_uri = host.":".destPort
| eval connection = source_uri."->".dest_uri
| stats values(fwdType) as fwdType, values(sourceIp) as sourceIp, latest(version) as version, values(os) as os, values(arch) as arch, dc(dest_uri) as dest_count, dc(connection) as connection_count, avg(tcp_KBps) as avg_tcp_kbps, avg(tcp_eps) as avg_tcp_eps by hostname, guid
| eval avg_tcp_kbps = round(avg_tcp_kbps, 2)
| eval avg_tcp_eps = round(avg_tcp_eps, 2)
| `dmc_rename_forwarder_type(fwdType)`
| rename hostname as Instance, fwdType as "Forwarder Type", sourceIp as IP, version as "Splunk Version", os as OS, arch as Architecture, guid as GUID, dest_count as "Receiver Count", connection_count as "Connection Count", avg_tcp_kbps as "Average KB/s", avg_tcp_eps as "Average Events/s"

And probably somehow join it with:

| tstats count values(host) AS host WHERE index=* BY index

The issue I see is that `dmc_get_forwarder_tcpin` expands to index=_internal sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=*, and I cannot find the indexes there. How can I connect it to each index?
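One way to connect the two is to join on the host name, under the assumption that the forwarder's hostname in tcpin_connections matches the host field on the indexed events (short name vs. FQDN mismatches can break this). A minimal sketch, trimmed to a few of the fields from the original search:

`dmc_get_forwarder_tcpin` hostname=*
| stats values(fwdType) as fwdType, values(sourceIp) as sourceIp, latest(version) as version, avg(tcp_KBps) as avg_tcp_kbps by hostname, guid
| join type=left hostname
    [| tstats count WHERE index=* BY host, index
     | stats values(index) AS indexes BY host
     | rename host AS hostname]

Note that join has subsearch result limits, so in a large environment it may be safer to run the tstats part separately (or as a lookup) and filter it to the forwarders' hosts.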
I finally found the way. To obtain the ID, you have to launch the "run query" action first. In the action fields, set the email address in the email field and the clean Message ID in the query field. Do not select any other option or fill in any other field. In the response you should see another ID in a base64-like format. This is the ID used to operate on emails. Keep in mind that this ID changes every time you perform any action on the email (moving it to a different folder, for instance). Hope this helps.
Do you have a specific example? I'm looking through the _configtracker index and not seeing any relevant info for savedsearches.conf changes.
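Not from the thread, but a sketch of the kind of search one could try against _configtracker, assuming the splunk_configuration_change sourcetype and data.path field that ship with recent versions (check the raw JSON if the names differ in your environment):

index=_configtracker sourcetype=splunk_configuration_change data.path="*savedsearches.conf"
| table _time data.path data.action

If nothing comes back, it may also be worth checking the [config_change_tracker] stanza in server.conf to confirm the feature is enabled on that instance.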
Further to @gcusello 's response, the chart command has only three dimensions: the over field, the by field, and the aggregation function. Strictly speaking, this can be extended with multiple aggregation functions, but you end up with composite column names. As already suggested, concatenating fields is one way to get around this. Another way is a bit trickier: use the stats command instead and then rework the fields so that the by field values become field/column names.
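As an illustration of that stats approach (not from the original answer), using the field names from the question below and an arbitrary "|" separator, one could rebuild the columns with xyseries and then split the concatenated key back into separate columns:

| stats sum(transactionMade) AS total BY USERNUMBER USERNAME POSTDATE
| eval key=USERNUMBER."|".USERNAME
| xyseries key POSTDATE total
| eval USERNUMBER=mvindex(split(key,"|"),0), USERNAME=mvindex(split(key,"|"),1)
| fields - key
| table USERNUMBER USERNAME *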
Okay, that is what I thought. Thank you so much for confirming!
Hi @sumarri, as far as I know, the only way is to concatenate the fields into one field and use it in the chart command. Ciao. Giuseppe
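For the fields in the question below, the concatenation Giuseppe describes might look like this (the separator is arbitrary):

| eval user=USERNUMBER." - ".USERNAME
| chart sum(transactionMade) over user by POSTDATE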
So, I have a chart command that works perfectly!

| chart sum(transactionMade) over USERNUMBER by POSTDATE

But I want my chart to have both USERNUMBER and USERNAME. They are correlated, so it should not be an issue. I also want to add Team Number, which has no correlation to USERNUMBER and USERNAME. Is it possible to have multiple fields after over? I can concatenate all the fields into one string, but it would be easier if they were separate columns. Thank you!
Hi @PatrikL, you have to list the hosts for each sourcetype and source and then extract the data by running a simple search, e.g.:

index=wineventlog sourcetype=xmlwineventlog source=WinEventLog:Security host=host1

and then manually load it (using the Add Data feature), setting the above fields. You could also save the files into folders named after each host and then use automatic host assignment. Either way, it's a long job. Ciao. Giuseppe
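As an alternative to the Add Data UI (not what Giuseppe described, but scriptable when there are many files), the exported raw file for each host can be re-indexed with the CLI oneshot input; the path, index and sourcetype below are placeholders:

$SPLUNK_HOME/bin/splunk add oneshot /tmp/host1_security.raw -index wineventlog -sourcetype XmlWinEventLog -host host1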
Hi, I am also facing the same issue. Could you please help with this?
That monitor stanza name looks OK. I hope the stanza itself contains index= and sourcetype= settings. Perhaps the hostname is not what you expect.  Try this search index=<<index name from inputs.conf>> sourcetype=<<sourcetype name from inputs.conf>> source=*printlog_*.log earliest=-1d latest=+1y  Have you confirmed other logs from the same UF are indexed?
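For reference, a monitor stanza with those settings might look like the sketch below; the path, index and sourcetype are placeholders, not taken from the original inputs.conf:

[monitor:///path/to/logs/printlog_*.log]
index = your_index
sourcetype = your_sourcetype
disabled = false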
@gcusello @kiran_panchavat I have permission on the directory as well. I also tried without using crcSalt, but had no luck.
Thanks for the reply. Could you please provide an example? I'm not quite sure what you mean. Should I add sourcetype, source, and host to the search before exporting?
Hi @PatrikL, you should extract the WinEventLog raw data by sourcetype, source, and host, and then import it into the new system manually using those values. Ciao. Giuseppe
Hi @himaniarora20 , answering your questions: the custom Add-On must be present on every Forwarder. If you have an already configured Deployment Server, you can load it on the DS and deploy it using the DS, but for it to take effect you also have to remove the old conf files from the $SPLUNK_HOME/etc/system/local folder. Otherwise the old conf files will continue to take precedence over the new ones. Indexer Discovery, as you can read in the URL I shared, must be configured in the outputs.conf file located in the TA_Forwarders Add-On, so it isn't installed on the DS itself but deployed to all the Forwarders using the DS. Before starting this job, I suggest taking a Splunk Admin training or engaging a Splunk Admin (better an Architect) to assess your infrastructure; don't start the work without adequate preparation! Ciao. Giuseppe
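For context, a forwarder-side outputs.conf using indexer discovery typically looks something like the sketch below; the stanza names, key and URI are placeholders, and newer versions accept manager_uri in place of master_uri:

# TA_Forwarders/local/outputs.conf, deployed to every forwarder via the DS
[indexer_discovery:my_cluster]
pass4SymmKey = <key configured on the cluster manager>
master_uri = https://clustermanager.example.com:8089

[tcpout:discovered_indexers]
indexerDiscovery = my_cluster

[tcpout]
defaultGroup = discovered_indexers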
Hi, it seems that you have some misunderstanding of the Splunk deployment architecture. Here is a document which shows the supported and proposed architectures for Splunk: https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf. As you can see, there is no DS between the indexers and the UFs (or source systems). The DS is just a management server which distributes all the needed apps (read: configurations) required on the UF side to collect the wanted events/logs/files from the source systems. The forwarders send all events (preferably) directly to the indexers. @gcusello already told you how this configuration is done on the DS side and what you need to do on the UFs to get the new configuration in use (remove those files from .../system/local/).
1) Yes, you must clean up .../system/local if that configuration was added at installation time or later. If you already use separate app(s) to manage those settings, it's not needed; just update those apps as needed and the DS will push them to the UFs.
2) Not all of them, just those which control the DS/DC connection, plus any additional inputs.conf, props.conf, etc. used to collect application logs from that system.
3) It depends on your environment. If you have static indexers (no additions, changes, or deletions), then you can also use their IPs or (I prefer) names in outputs.conf. But if your environment is dynamic, then you should definitely use indexer discovery. It needs to be configured on all your UFs (via your dedicated app which defines the general index/site configuration) and also on all your Splunk infra nodes except the indexers themselves.
r. Ismo
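For point 2, the app controlling the DS/DC connection is usually just a deploymentclient.conf; a minimal sketch with placeholder app name and host:

# e.g. org_all_deploymentclient/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089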
Have you read anything that has been written in this thread? Have you checked what OpenSSL version is actually used here? (I'm talking about the actual library version, not the filename.) How have you "observed the vulnerability"? Again: did Nessus "detect" it just by checking the filename? I'm all for vulnerability scanning, but it should be performed properly, not just "run the scanner with default settings and assume every finding is a true positive".
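One way to check the library Splunk actually ships (rather than the filename) is to ask the bundled openssl binary, assuming it is present under $SPLUNK_HOME/bin on that installation:

$SPLUNK_HOME/bin/splunk cmd openssl version -a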
Thanks for your reply, but I have a few questions since I am new to this.
1. On which server should I add the custom Add-On (Forwarder or DS)? We have hundreds of forwarders pointing to the indexer right now. Do we need to change all of them?
2. Since you are saying I should remove the already existing files in the $SPLUNK_HOME/etc/system/local folder, what should the contents of the newly added custom Add-On files be?
3. Also, the indexer discovery feature needs to be installed on the DS, right?
Thanks. This seems to work:

LINE_BREAKER = (\[[\s\n\r]*\{|\},[\s\n\r]+\{|\}[\s\n\r]*)

Why doesn't your regex work? Splunk needs only one capture group for the line break. You have three separate groups, even though you tried to make them selectable with |. You also need to escape some of those characters (like [ { ] }) so they are recognised as literal characters. You can test this with https://regex101.com/r/IGQHd7/1 When I test these I just use regex101.com and/or the Splunk GUI -> Settings -> Add Data -> Upload with an example file on my own laptop/workstation/dev server. That way it's easy to change the values and check how they affect the result. You should also change MAX_TIMESTAMP_LOOKAHEAD = 20. Since you define TIME_PREFIX, there is no reason to use -1 as the lookahead value; Splunk starts looking for the timestamp after the defined prefix, and as you can see the correct timestamp falls within 20 characters after it. Why have you set KV_MODE=json? Since you have broken this JSON into separate events, it is no longer JSON as a format; each event is now just regular text.
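Pulling the thread's suggestions into one props.conf sketch; the sourcetype name and TIME_PREFIX value are placeholders since the original settings aren't shown in full here, and SHOULD_LINEMERGE=false is the usual companion setting when relying on LINE_BREAKER:

# props.conf on the parsing tier (indexer or heavy forwarder)
[your_json_array_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\[[\s\n\r]*\{|\},[\s\n\r]+\{|\}[\s\n\r]*)
TIME_PREFIX = "timestamp"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 20
# KV_MODE=json removed: after line breaking, each event is no longer a complete JSON document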
We are currently moving our Splunk server to a new one, and during the change there was a mix-up and data got sent to the old instance (about 12h worth) which we would like to transfer to our new Splunk instance. My thought was to do a search on the old one and then export the results. When I export in RAW format and import it into the new one, the data looks good, but the field extractions for WinEventLog are not applied as they should be (even though I use the same event type). How can I solve this? I've also tried exporting as XML, JSON and CSV, but the data looks worse than with RAW.
From Splunk Support: "It will be resolved in the 9.1.3 and 9.2.1 releases." As a workaround, you can uninstall the UF and install the new version instead of upgrading.