All Topics

Hi - Is there a way to get two non-streaming searches to run in parallel in the same SPL? I am using "appendcols", but I think one is waiting for the other to finish. I can't use "multisearch" because my searches aren't streaming. The use case is displaying Splunk license usage, and I want to run the two searches in parallel; running them in sequence is very slow. Thanks in advance for any help.

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| search pool = "*"
| search idx != "mlc_log_drop"
| timechart span=1d sum(b) AS Live_Data fixedrange=false
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| appendcols
    [ search index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
    | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
    | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
    | eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
    | bin _time span=1d
    | stats sum(b) as b by _time, pool, s, st, h, idx
    | search pool = "*"
    | search idx = "mlc_log_drop"
    | timechart span=1d sum(b) AS Log_Drop_Data fixedrange=false
    | fields - _timediff
    | foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]]

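Since the two legs differ only in the idx filter, one possible workaround is to compute both series in a single pass and split them with an eval, avoiding appendcols entirely. A minimal sketch, assuming the final timechart is all you need (the pool/s/st/h splits collapse out of it anyway):

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval series=if(idx=="mlc_log_drop","Log_Drop_Data","Live_Data")
| bin _time span=1d
| stats sum(b) as b by _time series
| eval GB=round(b/1024/1024/1024, 3)
| xyseries _time series GB

The xyseries at the end pivots the two series into columns, which is what the appendcols version was producing.
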
Hi, I have a Splunk Enterprise environment. After configuring SAML via the frontend, it is not redirecting to the portal after authentication. What could be the reason?

Hi, I'm running the curl command:

curl -vvvvv https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk <token>" -d '{"sourcetype": "my_sample_data", "event": "ping"}'

and I got:

* Trying <IP>:8088...
* connect to <IP> port 8088 failed: Operation timed out
* Failed to connect to prd-p-xxxxx.splunkcloud.com port 8088 after 17497 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to prd-p-xxxxx.splunkcloud.com port 8088 after 17497 ms: Couldn't connect to server

I have a free trial, HEC is enabled, and the token is valid. What could cause this problem?

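One thing worth ruling out (an assumption about the stack type, since the timeout alone doesn't prove it): on Splunk Cloud, HEC generally does not listen on the web hostname. Trial stacks have historically used an inputs. prefix on port 8088, and managed stacks use http-inputs-<stack> on port 443. A sketch of the trial form, hostname assumed from the one in the post:

curl -vvvvv "https://inputs.prd-p-xxxxx.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"sourcetype": "my_sample_data", "event": "ping"}'

If that also times out, a firewall or IP allowlist between you and the stack is the likelier cause.
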
Hello, we got the following error on the AbuseIPDB API key setup page (Splunk version 9.0.6). Is there another way to enter the API key? Cheers

After some help: is there any way to get this to use a custom port for the two servers that use a non-443 port?

| makeresults
| eval dest="url1,url2,url3", dest=split(dest,",")
| mvexpand dest
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name=split(ssl_subject_alt_name,"|")
| eval days_left=round(ssl_validity_window/86400)
| table ssl_subject_common_name ssl_subject_alt_name days_left ssl_issuer_common_name
| sort days_left

I tried adding the port to the first eval, e.g.

| eval dest="url1,url2,url3", dest_port=8443, dest=split(dest,",")

It would be great if both the standard and custom ports could be returned together.

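One way to carry a per-host port through the pipeline is to embed the port in each entry and split it back out before the lookup. A sketch, assuming sslcert_lookup accepts a dest_port input field, which you'd need to confirm against the lookup's definition:

| makeresults
| eval raw="url1:443,url2:8443,url3:443", raw=split(raw,",")
| mvexpand raw
| eval dest=mvindex(split(raw,":"),0), dest_port=mvindex(split(raw,":"),1)
| lookup sslcert_lookup dest dest_port OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
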
Hi, I am checking for an underscore in field values, and if one is present I want to capture that value. For example: if name has an underscore in it, the value should get assigned to the APP field; if it does not have an underscore, the value should get assigned to the Host field.

name        APP         Host
ftr_score   ftr_score   NA
terabyte    NA          terabyte

I have tried using case and like statements but it does not work as expected.

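A likely reason the like() attempt misbehaves: in like(), the underscore is a single-character wildcard, so like(name,"%_%") matches almost any value. match() treats the underscore as a literal character. A minimal sketch:

| eval APP=if(match(name,"_"), name, "NA")
| eval Host=if(match(name,"_"), "NA", name)
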
Hi, I'm curious to know when the logs will be indexed after an incident is triggered in Splunk. Thanks

I have a requirement to check whether an employee shift roster (a lookup in Splunk) covers 24 hours in a day for each team. If it doesn't, I need to send an alert to the respective team notifying them that their shift roster is not configured properly. Can anybody help me with how to proceed? The employee_shift_roster.csv looks something like this:

Start time   End time   Team     Employee Name   Available
8:00         5:30       Team A   Roger           Y
5:30         8:00       Team A   Federer         Y
8:00         5:30       Team B   Novak           Y
5:30         7:00       Team B   Djokovic        Y

Now the alert should go out to Team B stating that their shift roster is not configured properly because 24 hours are not covered by the shifts. Thanks in advance for the help.

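A rough sketch of the coverage check, assuming the times can first be normalized to a 24-hour clock (the sample's 8:00/5:30 look ambiguous without AM/PM) and that shifts within a team don't overlap:

| inputlookup employee_shift_roster.csv
| eval start_s=strptime('Start time',"%H:%M"), end_s=strptime('End time',"%H:%M")
| eval dur=if(end_s>start_s, end_s-start_s, end_s-start_s+86400)
| stats sum(dur) as covered_secs by Team
| where covered_secs < 86400

The second eval handles shifts that wrap past midnight by adding a day; any team left after the where has a gap and could feed the alert action.
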
Hello, I am using Apache Tomcat for my application. Using the AppDynamics console, I downloaded the Java Agent for my application. After adding the agent path under setenv.bat for Apache Tomcat and running the server, I do get a notification saying "Started AppDynamics Java Agent Successfully". However, when I navigate to the Applications tab in the AppDynamics console, I don't see any metrics, and under Application Agents I don't see any agent registered. I verified the controller-info.xml file for the agent and it contains all the parameters needed to send details to my instance. But the metrics are not reported. Please help.

The machine agent is not starting. I downloaded the machine agent using my AppDynamics login, which provided me with a pre-configured setup for my account. When I try to run the agent using java - Machineagent.jar, I only see the details below. The agent does not initialize:

2023-11-20 11:27:49.417 Using Java Version [11.0.20] for Agent
2023-11-20 11:27:49.417 Using Agent Version [Machine Agent v23.10.0.3810 GA compatible with 4.4.1.0 Build Date 2023-10-30 07:13:09]

Earlier the agent was starting but was not reporting CPU, disk, and memory metrics. It only showed the running processes but no metrics data. Please suggest.

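If the lone dash in "java - Machineagent.jar" is verbatim rather than a transcription slip, that alone would prevent startup; the usual invocation is along these lines (install path hypothetical):

# run the machine agent jar directly
java -jar /opt/appdynamics/machine-agent/machineagent.jar
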
The query below produces results:

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| search job_name=*"Group06"* OR job_name=*"Group01"*
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass

but this one does not ($group$ is a dropdown whose selected option is Group06):

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| eval rerunGroup = case("$group$"=="Group06", "Group01", "$group$"=="Group07", "Group02", "$group$"=="Group08", "Group03", "$group$"=="Group09", "Group04", "$group$"=="Group10", "Group05", 1==1, "???")
(a "| table rerunGroup" here shows Group01 in the table)
| search job_name=*$group$* OR job_name=*rerunGroup*
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass

No big difference except the eval statement and passing the variable value. Can someone please help?

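A likely cause: in the search command, job_name=*rerunGroup* matches the literal text "rerunGroup", not the value of the rerunGroup field. Evaluating the field instead, for example with where/like, is one way around it (a sketch, not tested against your data):

| where like(job_name, "%$group$%") OR like(job_name, "%" . rerunGroup . "%")
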
Hello Experts,

I was wondering if you can help me figure out how to show the merged values in a field as 'unmerged' when using 'values' in the stats command. (DETAILS_SVC_ERROR) and (FARE/PRCNG/AVL-MULT. RSNS) are different values, but they come out merged; using "values" or "list" merges all values into one cell. How do I unmerge them? If I use 'mvexpand', it expands to a single count even when the values are the same.

Thanks in advance
Nishant

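If the goal is readable output rather than separate rows, one sketch (field names hypothetical, as the original query isn't shown) keeps values() but joins the multivalue result with an explicit delimiter so each value stays visually distinct:

| stats values(error_text) as error_text by request_id
| eval error_text=mvjoin(error_text, " | ")
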
Hi all, I'm trying to get data into the CrowdStrike Intel Indicators Technical Add-On following this guide, in the US Commercial 2 cloud environment. I realized that I can't find the Indicators (Falcon Intelligence) permission for the API token that the document mentions. I did find IOCs (Indicators of Compromise), Actors (Falcon Intelligence), and Reports (Falcon Intelligence), so I checked those. But it still fails with "ACCESS DENIED" errors like:

ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, please provide this TraceID to CrowdStrike support = <device_id>
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, error message = access denied, authorization failed
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: TA is shutting down

I have already used the same API token for the CrowdStrike Event Streams Technical Add-On and it works normally. Please help me fix this! Thank you.

When running a query with distinctcount in the UI, it works as expected and returns multiple results. But when run via the API, I only get 20 results no matter the limit:

curl --location --request POST 'https://analytics.api.appdynamics.com/events/query?end=1700243357711&limit=100&start=1700239757711' \
--header 'PRIVATE-TOKEN: glpat-cQefeyVa51vG__mLtqM6' \
--header 'X-Events-API-Key: $API_KEY' \
--header 'X-Events-API-AccountName: $ACCOUNT_NAME' \
--header 'Content-Type: application/vnd.appd.events+json;v=2' \
--header 'Accept: application/vnd.appd.events+json;v=2' \
--header 'Authorization: Basic <bearer>' \
--data-raw '{"query":"SELECT toString(eventTimestamp), segments.node, requestGUID, distinctcount(segments.tier) FROM transactions","mode":"scroll"}'

Result:

[
  {
    "label": "0",
    "fields": [
      { "label": "toString(eventTimestamp)", "field": "toString(eventTimestamp)", "type": "string" },
      { "label": "segments.node", "field": "segments.node", "type": "string" },
      { "label": "requestGUID", "field": "requestGUID", "type": "string" },
      { "label": "distinctcount(segments.tier)", "field": "segments.tier", "type": "integer", "aggregation": "distinctcount" }
    ],
    "results": [
      [ "2023-11-17T16:55:14.472Z", "node--1", "82bbb595-88b7-4e81-9ded-56cb5a42c251", 1 ],
      [ "2023-11-17T16:55:14.472Z", "node--7", "c7785e77-deb9-4aff-93fe-35efa7299871", 1 ],
      [ "2023-11-17T16:55:22.777Z", "node--7", "3c22d9dd-74a3-496c-b8b9-1c4ce48a6b1f", 1 ],
      [ "2023-11-17T16:55:22.777Z", "node--7", "d86e97ff-91c8-45f9-832e-ec6f06ffff26", 1 ],
      [ "2023-11-17T16:55:29.959Z", "node--1", "44abeb53-30e6-4973-8afb-f7cfc52a9ba7", 1 ],
      [ "2023-11-17T16:55:29.959Z", "node--1", "b577b0b0-2d41-4dfa-aceb-e42b5bb39348", 1 ],
      [ "2023-11-17T16:56:55.468Z", "node--1", "2f92e785-8cd0-4028-acd7-21fa794eb004", 1 ],
      [ "2023-11-17T16:56:55.468Z", "node--1", "af7ca09b-c4fc-4c73-8502-26d92b2b5835", 1 ],
      [ "2023-11-17T16:58:13.694Z", "node--1", "4a04304b-f48f-4c8d-9034-922077dfcdb4", 1 ],
      [ "2023-11-17T16:58:13.694Z", "node--7", "22755c60-efa0-434f-be73-0f02f0222021", 1 ],
      [ "2023-11-17T17:00:36.983Z", "node--1", "b6386249-6408-4517-812c-ca3d5e6c304c", 1 ],
      [ "2023-11-17T17:00:36.983Z", "node--1", "f7c63152-b569-42bf-8d37-32bb181d7028", 1 ],
      [ "2023-11-17T17:05:50.737Z", "node--1", "306c7cb3-0eff-440e-96e1-4ad61ce01a0c", 1 ],
      [ "2023-11-17T17:05:50.737Z", "node--1", "b33c04bd-dfd6-452a-b425-7639dad0422d", 1 ],
      [ "2023-11-17T17:08:24.554Z", "node--7", "0ed8c3a6-a59c-4fee-a10e-e542c239aa94", 1 ],
      [ "2023-11-17T17:08:24.554Z", "node--7", "c0e6f198-d388-49da-96a6-d9d95868c40f", 1 ],
      [ "2023-11-17T17:10:06.483Z", "node--1", "cf3caef6-2688-4eac-a2a1-b2edb55f0569", 1 ],
      [ "2023-11-17T17:10:06.483Z", "node--7", "40f5a786-e12d-4b0c-a54e-a330e7b7afd1", 1 ],
      [ "2023-11-17T17:11:47.167Z", "node--7", "02c3a025-6def-499d-9315-68551df639d4", 1 ],
      [ "2023-11-17T17:11:47.167Z", "node--7", "f19df1e3-983d-4a72-a5e0-a3f9ffbb7afe", 1 ]
    ],
    "moreData": true,
    "schema": "biz_txn_v1"
  }
]

Hi Splunkers,

Currently I have 8 indexers and about 100 indexes. Here is a sample of my indexes.conf:

# volumes
[volume:HOT]
path = /Splunk-Storage/HOT
maxVolumeDataSizeMB = 2650000

[volume:COLD]
path = /Splunk-Storage/COLD
maxVolumeDataSizeMB = 27500000

### indexes ###
[testindex1]
repFactor = auto
homePath = volume:HOT/testindex1
coldPath = volume:COLD/testindex1
thawedPath = /Splunk-Storage/COLD/testindex1/thaweddb
summaryHomePath = /Splunk-Storage/HOT/testindex1/summary
frozenTimePeriodInSecs = 47520000

[testindex2]
repFactor = auto
homePath = volume:HOT/testindex2
coldPath = volume:COLD/testindex2
thawedPath = /Splunk-Storage/COLD/testindex2/thaweddb
summaryHomePath = /Splunk-Storage/HOT/testindex2/summary
frozenTimePeriodInSecs = 47520000

I don't restrain my indexes by size, only by time. The current median age of all data is about 180 days. Here is my fstab file:

/dev/mapper/mpatha-part1 /Splunk-Storage/HOT xfs defaults 0 0
/dev/mapper/mpathb-part1 /Splunk-Storage/COLD xfs defaults 0 0

Now, for compliance reasons, I want to separate two of my indexes to preserve them for a longer duration (at least two years). I have considered two possible methods to accomplish this:

1.
a. Create a different path and volume.
b. Stop all indexers.
c. Move the two indexes to the new path.
d. Start all indexers.
If I'm correct, the issue is that I can't move just the two indexes, because I didn't mount different paths in the OS. Therefore, I would have to move all the other indexes to another path. Essentially, this means creating two paths and volumes in both my OS and indexes.conf.

2.
a. Decrease the frozenTimePeriodInSecs for all indexes except the two to, for example, 150 days.
b. Wait for Splunk to free up some disk space.
c. Increase the frozenTimePeriodInSecs for those two indexes to, for example, 730 days.

The second solution may seem more straightforward, but I'm uncertain whether it is a best practice or a good idea at all. Could you please guide me on how to implement the first solution with minimal downtime? Thank you in advance for your assistance!

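For what it's worth, frozenTimePeriodInSecs is a per-index setting, so the longer retention itself requires no data move. A sketch of what option 2's end state could look like for one of the two indexes (index name hypothetical):

# compliance index: keep for 730 days (730 * 86400 = 63072000 seconds)
[compliance_index1]
repFactor = auto
homePath = volume:HOT/compliance_index1
coldPath = volume:COLD/compliance_index1
thawedPath = /Splunk-Storage/COLD/compliance_index1/thaweddb
summaryHomePath = /Splunk-Storage/HOT/compliance_index1/summary
frozenTimePeriodInSecs = 63072000

One caveat to verify: when a volume hits maxVolumeDataSizeMB, Splunk freezes the oldest buckets across the whole volume regardless of per-index retention, so the COLD volume needs enough headroom for two years of those two indexes.
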
Hello, I have some issues performing field extractions using a transforms configuration. It's not giving field-value pairs as expected. Sample events and configuration files are given below. Some non-uniformities within the events (originally marked in bold) include the unquoted EventStatus: 0 and Scode: '' with no trailing comma. Any recommendations will be highly appreciated. Thank you so much.

My configuration files:

[mypropfConf]
REPORT-mytranforms = myTransConf

[myTransConf]
REGEX = ([^"]+?):'([^"]+?)'
FORMAT = $1::$2

Sample events:

2023-11-15T18:56:29.098Z OTESTN097MA4515620 TESTuser20248: UserID: '90A', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A5367817222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'Sec'
2023-11-15T18:56:29.021Z OTESTN097MB7513020 TESTuser20249: UserID: '95B', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A516670222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'tec'
2023-11-15T18:56:29.009Z OTESTN097MB9513020 TESTuser20248: UserID: '95A', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A546610222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'test'

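A likely reason for missing matches, based only on the samples above: the events have a space between the colon and the opening quote, but the REGEX requires :' with nothing in between; the value pattern ([^"]+?) also requires at least one character, so empty values like Scode: '' never match; and the lazy key capture [^"]+? can swallow preceding text into $1. A tighter sketch:

[myTransConf]
# word key, optional whitespace after the colon, possibly-empty single-quoted value
REGEX = (\w+):\s*'([^']*)'
FORMAT = $1::$2
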
I have tried to simplify the query for better understanding, removing some unnecessary things. The query is meant to find out whether the same malware has been found on more than 4 hosts (dest) in a given time span - something like a malware outbreak. Below is the index-based query that works fine. I am trying to convert this to a data model based query, but am not getting the desired results. I am new to writing data model based queries. Thanks for all the help!

(`cim_Malware_indexes`) tag=malware tag=attack
| eval grouping_signature=if(isnotnull(file_name),signature . ":" . file_name,signature)
  => create a new field called "grouping_signature" by concatenating the signature and file_name fields
| stats count dc(dest) as infected_device_count BY grouping_signature
  => calculate the distinct count of hosts that have the same malware found on them, by "grouping_signature"
| where infected_device_count > 4
  => keep results where the number of infected devices is greater than 4
| stats sum(count) AS "count" sum(infected_device_count) AS infected_device_count BY grouping_signature
  => find the total number of infected hosts by "grouping_signature"

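A rough tstats equivalent, assuming the CIM Malware data model is accelerated. One caveat: tstats by-fields drop rows where the field is null, so fillnull_value is used to keep events that have no file_name:

| tstats summariesonly=true fillnull_value="(none)" count dc(Malware_Attacks.dest) as infected_device_count
    from datamodel=Malware.Malware_Attacks
    by Malware_Attacks.signature Malware_Attacks.file_name
| rename Malware_Attacks.* as *
| eval grouping_signature=if(file_name!="(none)", signature . ":" . file_name, signature)
| where infected_device_count > 4

The trailing stats sum from the original isn't needed here, because each (signature, file_name) pair maps to exactly one grouping_signature.
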
Hi all, I'm having difficulty crafting regex that will extract a field that can have either one or multiple words. Using "add field" in Splunk Enterprise doesn't seem to get the job done either. The field I would like to extract is the "Country", which can be one word or multiple words. Any help would be appreciated. Below is my regex and a sample of the logs from which I am trying to extract fields. I don't consider myself a regex guru, so don't laugh at my field extraction regex. It works on everything except the country.

User\snamed\s(\w+\s\w+)\sfrom\s(\w+)\sdepartment\saccessed\sthe\sresource\s(\w+\.\w{3})(\/\w+\.*\/*\w+\.*\w{0,4})\sfrom\sthe\ssource\sIP\s(\d+\.\d+\.\d+\.\d+)\sand\scountry\s\W(\w+\s*)

11/17/23 2:25:22.000 PM [Network-log]: User named Linda White from IT department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.2 and country France at: Fri Nov 17 14:25:22 2023
host = ***** source = networks sourcetype = network_logs
[Network-log]: User named Robert Wilson from HR department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.1 and country United States at: Fri Nov 17 14:25:11 2023
host = ***** source = networks sourcetype = network_logs
11/17/23 2:25:21.000 PM [Network-log]: User named Christopher Turner from HR department accessed the resource Cybertees.THM/products/product2.html from the source IP 192.168.0.100 and country Germany at: Fri Nov 17 14:25:17 2023
host = ***** source = networks sourcetype = network_logs

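Since every line ends with " at: <timestamp>", one sketch is to anchor the country capture on that trailing marker instead of guessing the word count (the lazy .+? stops at the first " at:"):

| rex "and\scountry\s(?<country>.+?)\sat:"

The same idea drops into the full extraction by replacing the final \W(\w+\s*) with ((?:\w+\s)*\w+)(?=\sat:).
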
Hi all,

I created an environment with the following instances:

cluster master
three search heads
four indexers
heavy forwarder
license server
deployment server
deployer

We have more than 50 clients, so I deployed the deployment server on a dedicated server. We have some indexes, but one of them (say an index named A) receives about 35K events per minute. The heavy forwarder load-balances the events between the four indexers. The replication factor is 4 and the search factor is 3. A simple search like 'index=A' returns about 17M events in about 5 minutes. I want to speed up searches on index A. I can change the whole deployment and environment if anyone has an idea for speeding up the search. I would be grateful if anyone could help me with parameters like replication factor, search factor, number of indexers, etc., to speed up the search. Thank you.

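One note that may help scope the problem: retrieving 17M raw events is bound by disk I/O and decompression regardless of the cluster factors (the replication factor affects resilience rather than search speed; adding indexers spreads the work). If the end goal is a count or aggregate rather than the raw events themselves, an index-time-only query avoids reading the raw data. A sketch:

| tstats count where index=A by _time span=1m
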
Hey! So I'm using an EC2 Splunk AMI and have all the correct apps loaded, but cannot for the life of me get the BOTS v1 data into my environment. I've put it into $SPLUNK_HOME/etc/apps (as mentioned on GitHub) and it did not work; Splunk simply does not pick it up as a data set, and instead it sits comfortably in my apps. Loading it in other ways means it doesn't come through correctly. Is this a timestamp issue? Any help would be appreciated.

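If the app did extract into $SPLUNK_HOME/etc/apps and Splunk was restarted afterwards, one common gotcha worth ruling out (an assumption - I can't see the environment): the BOTS v1 events are from 2016, so the default "Last 24 hours" time range shows nothing. Searching over all time should reveal whether the data is actually there (index name botsv1 per the dataset's README):

index=botsv1 earliest=0
| stats count by sourcetype
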