So, I have a chart command that works perfectly! | chart sum(transactionMade) over USERNUMBER by POSTDATE But I want my chart to have both USERNUMBER and USERNAME. They are correlated, so it should not be an issue. I also want to add Team Number, which has no correlation to USERNUMBER or USERNAME. Is it possible to have multiple fields after over? I can concatenate all the fields into one string, but it would be easier if they were separate columns. Thank you!
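The chart command only accepts a single field after over, but the concatenation idea mentioned above can be made to produce separate columns by splitting the combined field back out after charting. A sketch, assuming the field names from the question:

```
| eval row = USERNUMBER . "|" . USERNAME
| chart sum(transactionMade) over row by POSTDATE
| eval USERNUMBER = mvindex(split(row, "|"), 0),
       USERNAME   = mvindex(split(row, "|"), 1)
| fields - row
| table USERNUMBER USERNAME *
```

The pipe character used as the separator is an assumption; pick any character that cannot appear in the field values themselves.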
Hi @PatrikL, you have to list the hosts for each sourcetype and source and then extract the data by running a simple search, e.g.: index=winwvwntlog sourcetype=xmlwineventlog source=WinEventLog:Security host=host1 and then manually load it (using the Add Data feature) using the above fields. You could also save the files using the host name as the folder name and then use an automatic assignment of the host. But anyway, it's a long job. Ciao. Giuseppe
Hi, I am also facing the same issue, could you please help in this?
That monitor stanza name looks OK. I hope the stanza itself contains index= and sourcetype= settings. Perhaps the hostname is not what you expect. Try this search: index=<<index name from inputs.conf>> sourcetype=<<sourcetype name from inputs.conf>> source=*printlog_*.log earliest=-1d latest=+1y Have you confirmed that other logs from the same UF are indexed?
@gcusello @kiran_panchavat I have permission on the directory as well. I also tried without crcSalt, but no luck.
Thanks for the reply. Could you please provide an example? I'm not quite sure what you mean. Should I add sourcetype, source and host to the search before exporting?
Hi @PatrikL, you should extract the raw WinEventLog data by sourcetype, source and host, and then import it into the new system manually using these values. Ciao. Giuseppe
Hi @himaniarora20, to answer your questions: the custom Add-On must be located on every Forwarder. If you have an already configured Deployment Server, you can load it onto the DS and deploy it using the DS, but for it to take effect you also have to remove the old conf files from the $SPLUNK_HOME/etc/system/local folder. Otherwise the old conf files will continue to take precedence over the new ones. Indexer Discovery, as you can read in the URL I shared, must be configured in the outputs.conf file located in the TA_Forwarders Add-On. So it must not be installed on the DS itself, but deployed to all the Forwarders using the DS. Before starting this job, I suggest taking a Splunk Admin training course or engaging a Splunk Admin (better, an Architect) to assess your infrastructure; don't start this job without adequate preparation! Ciao. Giuseppe
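As a rough sketch of what the Indexer Discovery configuration in that Add-On looks like (the group name, cluster manager hostname and secret below are placeholders, not values from this thread):

```
# outputs.conf, deployed to every forwarder inside the Add-On
[indexer_discovery:my_cluster]
master_uri = https://clustermanager.example.com:8089
pass4SymmKey = <discovery secret>

[tcpout:discovered_indexers]
indexerDiscovery = my_cluster

[tcpout]
defaultGroup = discovered_indexers
```

The same pass4SymmKey must also be set in an [indexer_discovery] stanza in server.conf on the cluster manager so the forwarders can authenticate.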
Hi, it seems that you have some misunderstanding of the Splunk deployment architecture. Here is a document showing supported and proposed architectures for Splunk: https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf. As you can see, there is no DS between the indexers and the UFs (or source systems). The DS is just a management server that defines all the apps (read: configurations) needed on the UF side to collect the wanted events/logs/files from the source systems. Those UFs send all events (preferably) directly to the indexers. @gcusello already told you how this configuration is done on the DS side and what you need to do on the UFs to get the new configuration in use (remove those files from .../system/local/). 1) Yes, you must clean up .../system/local on all of them if that configuration was added at installation time or later on. If you already use separate app(s) to manage those settings, it's not needed; just update those apps as needed and the DS will push them to the UFs. 2) Not all of them, just the files which control the DS/DC connection, plus any additional inputs.conf, props.conf etc. used to collect application logs from that system. 3) It depends on your environment. If you have static indexers (no additions, changes, deletions), then you can also use their IPs or (I prefer) names in outputs.conf. But if your environment is dynamic, then you should definitely use Indexer Discovery. It needs to be installed on all your UFs (via your dedicated app which defines the general index/site configuration) and also on all your Splunk infra nodes except the indexers themselves. r. Ismo
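The DS/DC connection mentioned in point 2 is typically just a small deploymentclient.conf on each UF; a minimal sketch, where the DS hostname is a placeholder:

```
# deploymentclient.conf on each universal forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = https://deploymentserver.example.com:8089
```

Once this file is under an app managed by the DS rather than in system/local, the DS can update the connection itself centrally.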
Have you read anything that has been written in this thread? Have you checked what openssl version is used here? (I'm talking about the actual library version, not the filename). How have you "observed vulnerability"? Again - Nessus "detected" it by checking the filename? I'm all for vulnerability scanning, but it should be performed properly, not just "run a scanner with default settings and assume every finding is a true positive".
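One way to check the actual library version rather than the filename is to ask the Splunk-bundled openssl binary directly (a diagnostic sketch; the path assumes a default UF install location):

```
# Report the real version string of the OpenSSL Splunk links against
/opt/splunkforwarder/bin/splunk cmd openssl version

# Or inspect the version strings embedded in the shipped library
strings /opt/splunkforwarder/lib/libcrypto.so.1.0.0 | grep -i "^OpenSSL"
```

The embedded version string, not the .so filename, is what tells you which OpenSSL release (and backported fixes) you actually have.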
Thanks for your reply, but I have a few questions since I am new to this. 1. On which server should I add the custom Add-On (Forwarder or DS)? We have hundreds of forwarders pointing to the indexer right now. Do we need to change all of them? 2. And since you are saying I should remove the already existing files in the $SPLUNK_HOME/etc/system/local folder, what should the contents of the newly added custom Add-On files be? 3. Also, the Indexer Discovery feature needs to be installed on the DS, right?
Thanks. This seems to work: LINE_BREAKER = (\[[\s\n\r]*\{|\},[\s\n\r]+\{|\}[\s\n\r]*) Why doesn't your regex work? Splunk needs only one capture group for the line break. You have three separate groups, even though you tried to make them alternatives with |. You also need to escape some of those characters (like [ { ] }) for them to be recognised as literal characters. You can test this with https://regex101.com/r/IGQHd7/1 When I test these, I use just regex101.com and/or the Splunk GUI -> Settings -> Add Data -> Upload with an example file on my own laptop/workstation/dev server. That way it's easy to change the values and check how they affect the result. You should also change MAX_TIMESTAMP_LOOKAHEAD = 20 As you have defined TIME_PREFIX, there is no reason to use -1 as its lookahead value. Splunk starts looking for the timestamp after the defined prefix, and as you can see, the correct timestamp is within 20 characters after it. Why have you set KV_MODE=json? As you have broken this JSON into separate events, it is no longer JSON as a format; now it's just a regular text-based event.
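Put together, the props.conf stanza discussed above might look something like this (a sketch; the sourcetype name is hypothetical, and the original timestamp prefix regex is not shown in the thread):

```
# props.conf for the JSON-array source (sourcetype name is a placeholder)
[my_json_array_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\[[\s\n\r]*\{|\},[\s\n\r]+\{|\}[\s\n\r]*)
# Replace with the actual prefix regex from the original config
TIME_PREFIX = <your timestamp prefix regex>
MAX_TIMESTAMP_LOOKAHEAD = 20
# KV_MODE = json removed, per the discussion above
```

Testing with Settings -> Add Data -> Upload on a dev instance lets you iterate on these values before deploying them.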
We are currently migrating our Splunk server to a new one, and during the change there was a mix-up and we got data sent to the old instance (about 12 hours' worth) which we would like to transfer to our new Splunk instance. My thought was to do a search on the old one and then export the results. When I do this in RAW format and then import it to the new one, the data looks good, but the field extractions for WinEventLog are not applied as they should be (even though I use the same event type). How can I solve this? I've also tried exporting as XML, JSON and CSV, but the data looks worse than with RAW.
From Splunk Support: "It will be resolved in the 9.1.3 and 9.2.1 releases." As a workaround, you can uninstall the UF and install the new version instead of upgrading.
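On an RPM-based Linux host, that uninstall-and-reinstall workaround might look like the sketch below. The package filename and paths are assumptions, and backing up $SPLUNK_HOME/etc first is essential if you want to keep the existing configuration:

```
# Stop the forwarder and back up its configuration first
/opt/splunkforwarder/bin/splunk stop
cp -r /opt/splunkforwarder/etc /tmp/splunkforwarder-etc-backup

# Remove the old package and install the new one instead of upgrading in place
rpm -e splunkforwarder
rpm -ivh splunkforwarder-9.2.1-<build>.x86_64.rpm
```

On Debian-based hosts the equivalent would use dpkg/apt; the key point is a clean install rather than an in-place upgrade.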
How to Create Dataset inside DataModel and add new Fields in Dataset using Splunk SDK for Java
I'm having the same problem on our DCs. Did you find a solution?
We need to monitor an Azure API Management self-hosted gateway and get all the traces. The gateway is an AKS container with image mcr.microsoft.com/azure-api-management/gateway:v2. Inside is a .NET application with:

/app $ dotnet --info
Host:
  Version: 6.0.26
  Architecture: x64
  Commit: dc45e96840
.NET runtimes installed:
  Microsoft.AspNetCore.App 6.0.26 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
  Microsoft.NETCore.App 6.0.26 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

We would expect to monitor it the same way as any other .NET application, but we don't catch any BTs or traces. We also inject the following env:

- name: CORECLR_PROFILER
  value: "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}"
- name: CORECLR_ENABLE_PROFILING
  value: "1"
- name: CORECLR_PROFILER_PATH
  value: "/opt/appdynamics-dotnetcore/libappdprofiler.so"
- name: LD_DEBUG
  value: all
- name: LD_LIBRARY_PATH
  value: /opt/appdynamics-dotnetcore/dotnet
- name: IIS_VIRTUAL_APPLICATION_PATH
  value: "/"

Please help.
Yes, we still observe the vulnerability: the OpenSSL library files are at 1.0.2zi (FIPS) with the latest SplunkForwarder 9.2.0.1, as below.

# cat /opt/splunkforwarder/etc/splunk.version
VERSION=9.2.0.1
BUILD=d8ae995bf219
PRODUCT=splunk
PLATFORM=Linux-x86_64

Library files:
-r-xr-xr-x. 1 splunk splunk 475784 Feb 7 00:48 libssl.so.1.0.0
-r-xr-xr-x. 1 splunk splunk 2996816 Feb 7 00:48 libcrypto.so.1.0.0

How do we mitigate this vulnerability?
It worked for me, but surely this is something that should not happen. There are no warnings in Splunk; it just goes bang and Splunk is down in production.
@uagraw01 Hello, all files matching that pattern, such as /scada_server/walmart_1.xml, /scada_server/walmart_2.xml, /scada_server/walmart_3.xml, and so forth, are matched by /walmart_*.xml. Could you please verify the permissions for every file inside this directory? Also, you can try removing the crcSalt setting. Check the below document for more examples: https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards
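For reference, a minimal wildcard monitor stanza for that layout might look like this (a sketch; the index and sourcetype names are assumptions, not values confirmed in the thread):

```
# inputs.conf on the universal forwarder (index/sourcetype are hypothetical)
[monitor:///scada_server/walmart_*.xml]
index = scada
sourcetype = walmart_xml
disabled = false
```

If events still do not arrive, checking splunkd.log on the forwarder for TailReader messages about these files usually shows whether they are being skipped for permission or CRC reasons.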