All Topics


Hi Splunkers, I need help sorting these multi-value fields based on the latest timestamp and status. Here's my dummy query:

| makeresults
| eval hostname = "server101"
| eval id = "123|124"
| eval database_timestamp = "Mar 03, 2022 12:59:46 PM|Feb 23, 2022 1:19:24 PM"
| eval database_status = "Online|Offline (30 days ago)"
| eval server_timestamp = "Feb 22, 2022 1:19:24 PM|Mar 01, 2022 12:59:46 PM"
| eval server_status = "Offline (31 days ago)|Online"
| fields hostname id database_timestamp database_status server_timestamp server_status
| makemv delim="|" database_timestamp
| makemv delim="|" database_status
| makemv delim="|" server_timestamp
| makemv delim="|" server_status
| makemv delim="|" id

Below are the sample output and the expected output.

Current Output:
hostname: server101
database_timestamp: Mar 03, 2022 12:59:46 PM / Feb 23, 2022 1:19:24 PM
database_status: Online / Offline (30 days ago)
server_timestamp: Feb 22, 2022 1:19:24 PM / Mar 01, 2022 12:59:46 PM
server_status: Offline (31 days ago) / Online

Expected Output:
hostname: server101
database_timestamp: Mar 03, 2022 12:59:46 PM / Feb 23, 2022 1:19:24 PM
database_status: Online / Offline (30 days ago)
server_timestamp: Mar 01, 2022 12:59:46 PM / Feb 22, 2022 1:19:24 PM
server_status: Online / Offline (31 days ago)

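One possible approach (a sketch only, shown just for the server_* pair; "##" is an arbitrary separator, and the other multivalue fields would need the same treatment): pair the values with mvzip, expand to one row per pair, sort on the parsed timestamp, then rebuild the multivalue fields with stats list().

| eval pair=mvzip(server_timestamp, server_status, "##")
| mvexpand pair
| eval server_timestamp=mvindex(split(pair, "##"), 0)
| eval server_status=mvindex(split(pair, "##"), 1)
| eval sort_key=strptime(server_timestamp, "%b %d, %Y %I:%M:%S %p")
| sort 0 hostname, -sort_key
| stats list(server_timestamp) AS server_timestamp list(server_status) AS server_status by hostname
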
Hello, we have been using the Okta Identity Add-on (https://splunkbase.splunk.com/app/3682/#/details) for about 5+ months now. We discovered that the user import has not been able to fetch all the user accounts. As per the product documentation, the users input job imports all the user accounts in its first run and thereafter, in subsequent runs, only brings in the users who have been modified or changed. But in our case, we are seeing that even the first run did not bring in everything. My question is: is there a way to manually re-run the user import to fetch everything from scratch? These are our settings:

Hello, I have a situation where I am trying to pull, from within a field, values with the nomenclature ABC-1234-56-7890, but I only want to pull the first three letters and the last four numbers into one field. I have the following query so far but have not figured out how to do the above:

| rex field=comment (?<ABC>ABC\-\d+\-\d+\-\d+)

I want the return to be "ABC-7890". What am I missing so that I can successfully pull both the beginning and the end of the string described above? Thanks!

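A possible sketch, assuming the three leading letters and the four trailing digits can vary: capture them in separate groups, then concatenate with eval.

| rex field=comment "(?<prefix>[A-Z]{3})-\d+-\d+-(?<suffix>\d{4})"
| eval ABC=prefix."-".suffix
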
While preparing to upgrade an indexer cluster with RF=1, I'm wondering what the effective behaviour of a cluster in maintenance mode is with this RF. If an indexer goes down because of the upgrade activity and restart, there is no data to replicate to other nodes anyway, so no fix-ups should occur. So maintenance mode does not really do much in this case, am I right?

Hi there, I have a log line like this:

http://some.url/path/?param=x,y,z

I want to extract a field "extractedParam" with the value "x,y,z", and then extract the three values into a multivalue field "mvExtractedParam". In Splunk Cloud I am using a field extraction with the following regex, wrapped in a field transformation (where I can check "Create multivalued fields"). So I am trying to do everything within one regex, and this is where I am struggling:

\?param=(?<extractedParam>.*)

This extracts "x,y,z". Right now I don't know how to chain the next step... All the best, Marco

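If a single extraction regex proves awkward, one search-time sketch (field names follow the question; split() handles the commas instead of the regex) would be:

| rex field=_raw "\?param=(?<extractedParam>[^\s&]+)"
| eval mvExtractedParam=split(extractedParam, ",")

An equivalent alternative after the extraction is | makemv delim="," extractedParam, which turns the existing field itself into a multivalue field.
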
Hello, I have installed Splunk Enterprise on Ubuntu 20.04 twice now, but I get warnings from licensing when adding sources. I installed a 5GB/day license and added a syslog input on udp/1514 and a new index. After this, Splunk starts complaining about:

This deployment is subject to license enforcement. Search is disabled after 45 warnings over a 60-day window. Licensing alerts notify you of excessive indexing warnings and licensing misconfigurations.
1 cle_pool_over_quota message reported by 1 indexer. Correct by midnight to avoid warning.

Can anyone point me in the right direction? The total amount of data is 0MB, so this is clearly not correct. Regards, Jon

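As a starting point for troubleshooting (a diagnostic sketch to run on the license manager, not a fix), the license usage log shows what volume Splunk believes each pool has indexed per day:

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by pool
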
Hi everyone, I have an issue upgrading the Splunk universal forwarder from 7.3.3 to 8.1.3 (Windows platform). During our investigation, we found that the problem only occurs on machines that previously ran UF 6.5.2. We tried a few tricks with an MSI package recache, repair, or uninstall, but cannot find a way to install version 8.1.3. There is no problem going back to version 7.3.3: we do the standard install and everything works fine. No matter what we do, the 8.1.3 installation log still shows that the MSI installer detects a previous version of the product, 6.5.2 (the workstation actually has 7.3.3). Do you have an idea of what we can try?

Hello, I would like to improve the escalation policy in our organization. Currently every user has different settings, but we want to introduce one standard for each user - is that possible? If yes, could you give me some tips or a template for how we can do this with your support?

Prior to upgrading from 8.1 to 8.2 I'm reading https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Reducetsidxdiskusage#The_tsidx_writing_level and one thing is not entirely clear to me. The documentation states:

- A change to the tsidxWritingLevel is applied to new index bucket tsidx files. There is no change to the existing tsidx files.
- A change to the tsidxWritingLevel is applied to newly accelerated data models, or after a rebuild of the existing data models is initiated. All existing data model accelerations will not be affected.

The first statement is pretty straightforward - if I raise the tsidxWritingLevel, only newly created buckets will be indexed with the new level. That's pretty obvious. But I'm not entirely sure what the description for accelerated data models means. If it worked the same way, I'd expect already created summaries to be left as-is on their own level, but newly created summary "buckets" (are they still called that in the case of data model acceleration summaries?) to be created with the new level. Is that so? Or does it apply to the whole acceleration summary only after a complete rebuild? That would be rather unfortunate, especially since I have some huge accelerated data models.

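For reference, the setting itself lives in indexes.conf on the indexers; a minimal sketch (the level value 3 is only an example and does not answer the summary-rebuild question):

# indexes.conf - can be set globally under [default] or per index
[default]
tsidxWritingLevel = 3
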
Hello, what could be the explanation for a correlation search that is set to run live, yet on the Next Scheduled Time tab in /app/SplunkEnterpriseSecuritySuite/ess_content_management the Next Scheduled Time appears to be in the past (today is the 3rd of March)? It is also not triggering any events in the Incident Review tab in the Enterprise Security app. Thanks to anyone who can give any hints, I appreciate it.

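One way to narrow this down (a diagnostic sketch; the savedsearch_name value below is a placeholder to replace with the actual correlation search name) is to check what the scheduler has actually done with the search:

index=_internal sourcetype=scheduler savedsearch_name="<correlation search name>"
| table _time app savedsearch_name status run_time result_count
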
I have a field (eventCode) which has code values, and a few of them end with certain letters. I want to extract only the eventCode values which end with E, F, or V and display them separately under different fields/names (minor, major, medium). I tried | where eventCode=*E, but this does not work. Is there any other way to extract them, other than rex/regex? If not, can you please provide some input?

Example:
eventCode=xyxbxsndsndg-5-3000-E
eventCode=aksjdjfdfvbrhgnvfmbfbc-54-3601-E
eventCode=plgkdfdcmasjenfmdklv-61-2501-F
eventCode=pojdksdjhmmmaskxjs-91-4501-V

Result:
Minor: xyxbxsndsndg-5-3000-E, aksjdjfdfvbrhgnvfmbfbc-54-3601-E
Major: plgkdfdcmasjenfmdklv-61-2501-F

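A possible rex-free sketch using eval's like() function (the suffix-to-name mapping E=Minor, F=Major, V=Medium is taken from the question):

| eval severity=case(like(eventCode, "%-E"), "Minor",
                     like(eventCode, "%-F"), "Major",
                     like(eventCode, "%-V"), "Medium")
| eval Minor=if(severity="Minor", eventCode, null())
| eval Major=if(severity="Major", eventCode, null())
| eval Medium=if(severity="Medium", eventCode, null())
| stats values(Minor) AS Minor values(Major) AS Major values(Medium) AS Medium
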
I have two Splunk SPL searches.

First SPL:
index=computer_admin source=admin_priv sourcetype=prive:db account_name=admin earliest=-1d
| fields comp_name,comp_role,account_name,local_gp,gp_name
| table comp_name,comp_role,account_name,local_gp,gp_name

The comp_name field has values such as AAAAA, BBBBB, CCCCC, AFSGSH, GFDFDF, IUYTE, HGFDJ, ZZZZZ, YYYYYY, IIIIII, EEEEEE. Basically I am looking for all the comp_names that the admin is on and copying the list to use in another SPL to get the comp owners.

Second SPL:
index=computer_admin source=emp_card_details sourcetype="something:db" C_NAME IN (AAAAA, BBBBB, CCCCC, AFSGSH, GFDFDF, IUYTE, HGFDJ, ZZZZZ, YYYYYY, IIIIII, EEEEEE)
| eval arl=lower(C_NAME)
| stats values(asset_owner) by arl

Can we use a subsearch or anything similar to get this done in one SPL? Any assistance?

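A subsearch can feed the comp_name list straight into the second search. A sketch built from the two searches above (note that subsearches have result-count and runtime limits, so a very large admin list may be truncated):

index=computer_admin source=emp_card_details sourcetype="something:db"
    [ search index=computer_admin source=admin_priv sourcetype=prive:db account_name=admin earliest=-1d
      | fields comp_name
      | dedup comp_name
      | rename comp_name AS C_NAME ]
| eval arl=lower(C_NAME)
| stats values(asset_owner) by arl
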
Hello, I am trying to count the number of distinct devices used each day per group over a week, and compare the result with the maximum available resources. For each day I count a different number of used devices per group. For the week I want to determine the maximum value for each group and compare that value with a predefined maximum available value. With a query like this (over Last 7 days):

<search> | timechart span=1d dc(devicename) by groupname

I get a table like this:

_time             Group1      Group2    Group3 ...
7.1.2022       4                  8                 1
8.1.2022       2                  3                 0
9.1.2022       6                  2                 0
...

How can I calculate the max value of each column (group) and compare it with a predefined value for that group? With timechart I did not succeed - it seems the values are not passed through to the next command?

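One possible sketch: flatten the timechart output with untable, take the max per group, and then attach the predefined limits (the limit values 10, 12 and 5 below are placeholders; a lookup would do the same job more cleanly):

<search>
| timechart span=1d dc(devicename) by groupname
| untable _time groupname used
| stats max(used) AS max_used by groupname
| eval max_available=case(groupname="Group1", 10, groupname="Group2", 12, true(), 5)
| eval over_limit=if(max_used > max_available, "yes", "no")
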
I am trying to index a small CSV file with only one column (both with monitoring and manually). Is that not possible? I was able to index it only after I added an additional column. For monitoring I have defined the inputs.conf below:

[monitor:///opt/mailboxes_not_created_empid/*.csv]
disabled = 0
sourcetype = csv_current_time
index = mailboxes_not_created_empid
crcSalt = <SOURCE>
initCrcLength = 512

The CSV (comma separated) file is:

Employee_Number
141941
180536
189377

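One thing worth checking is whether the sourcetype has structured-data parsing configured. A minimal props.conf sketch for the csv_current_time sourcetype (an assumption that indexed extractions are wanted here; this alone may or may not explain the single-column behaviour):

# props.conf on the forwarder doing the monitoring
[csv_current_time]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
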
1. Which firewall port is used for Splunk integration with EPM SaaS?
2. Any idea about the volume of events received in megabytes per day?

Hi at all, this is a different question than usual: I received an email from Splunk Accreditations Team <admin@mindtickle.com> with the following subject: "Accreditation assessment module has moved!". I didn't request any accreditation assessment and I don't know this email address. Does anyone recognize this email? Ciao and thanks. Giuseppe

Hello everyone, I have a correlation search set up to detect Suricata IDS alerts of a specific severity and trigger a notable as a response action in ES. I would like to know if there is a way to optimize my search and transform it into a tstats one in order to improve speed and performance. My current search:

index=suricata sourcetype=suricata event_type=alert alert.severity=1

I have the "Intrusion Detected" data model populated with Suricata logs (and accelerated). I would like to know if I can take advantage of the acceleration and use a tstats command in my correlation search in order to save some resources. Thank you in advance. Regards, Chris

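Assuming the data model in question is the CIM Intrusion Detection model (root dataset IDS_Attacks) and that alert.severity=1 is mapped to severity="critical" by the Suricata add-on - both assumptions to verify against the actual CIM mappings - a tstats sketch could look like:

| tstats summariesonly=true count from datamodel=Intrusion_Detection.IDS_Attacks
    where IDS_Attacks.severity="critical"
    by _time span=5m IDS_Attacks.src IDS_Attacks.dest IDS_Attacks.signature
| `drop_dm_object_name("IDS_Attacks")`

summariesonly=true restricts the search to the accelerated summaries, which is where the performance gain comes from.
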
Hello All, I am working on installing and getting data in for SC4S (Splunk Connect for Syslog). For installation I followed the link below, and the service is running without any error:

https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/quickstart_guide/

The problem I am facing is testing the data in. As mentioned in the documentation, I am trying to send data to the UDP port, but somehow it gives the error below.

Command I am running:
echo “Hello SC4S” > /dev/udp/<SC4S_ip>/514

Error I am getting:
bash: /dev/udp/: Is a directory

Note: currently the test data is sent from the same machine SC4S is installed on, as I am doing a POC on SC4S, so I am using only one machine for both. Can anyone please help me with this, as I am not finding any related question on this?

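For reference, a minimal sketch of the test command with plain ASCII quotes and a concrete address substituted for the <SC4S_ip> placeholder (192.0.2.10 below is just an example value; use the real SC4S host IP, or 127.0.0.1 when sending from the same machine):

# plain double quotes, and a literal IP instead of the <SC4S_ip> placeholder
echo "Hello SC4S" > /dev/udp/192.0.2.10/514
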
Hello there, team! I'm hoping someone can assist me with this requirement or confirm whether a solution exists. I need to filter specific log types and build real-time dashboards from them. Is there a service that can assist me with this? The dashboards should be viewable in real time and should be self-contained once set up. I'm hoping the team's expertise will come through with a solution as soon as feasible.

Hi Splunk team, I have a question about searching in the Splunk console. I got the issue below:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

We are using an Enterprise license. Does anyone have an idea about this case? I appreciate it.