All Topics


I am wondering whether extracted fields can be tied to a specific source rather than a source type. The incoming event structure changes based on the source even though the events share a source type, so one extracted field gets the correct value while another does not unless I specify the source I am looking for. I also have cases where events with the same host, source, and source type have different structures from event to event, and I am not sure how to extract fields in that situation either. Any suggestions are much appreciated.
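For what it's worth, props.conf does accept source-scoped stanzas, and a source:: stanza overrides a sourcetype stanza for the same setting. A minimal sketch, assuming search-time extractions and purely illustrative paths, field names, and regexes:

[source::/var/log/app/format_a.log]
# Extract status from key=value style events arriving from this source
EXTRACT-status = status=(?<status>\w+)

[source::/var/log/app/format_b.log]
# Same field, different structure: JSON-style events from this source
EXTRACT-status = "status"\s*:\s*"(?<status>[^"]+)"

For structures that vary event to event within a single source, several EXTRACT- entries whose regexes each match only one layout (or search-time rex with alternation) is a common approach.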
I am upgrading the Splunk UF from 8.0.5 to 9.0.0 across all my Linux flavours (RHEL 6/7/8, Amazon Linux 2018 and 2, and CentOS 7). It installed properly everywhere except RHEL 6 and Amazon Linux 2018. When I execute the command below through an automated script it hangs, but surprisingly the same command works fine when I run it from a tty. I used both sh and bash in my shebang. I found a few hung child processes when I ran ps -eaf | grep splunk: "/opt/splunkforwarder/bin/splunk start --accept-license --no-prompt"
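Not a confirmed fix, but a common cause when splunk start hangs under automation is the CLI waiting on stdin or a terminal that the script does not provide. A sketch that answers everything up front and detaches from the terminal (the log path is illustrative):

#!/bin/bash
# --answer-yes handles any remaining y/n prompts; </dev/null stops the
# CLI from blocking on stdin when no tty is attached.
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt </dev/null >>/tmp/splunk_start.log 2>&1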
Hi All, I have two sets of logs in two different sources in Splunk: one containing the predefined list of VPNs and Queues, and the other containing the list of VPNs and Queues where activity has occurred. I now have to compare the second set with the first set and create a table.

Set 1 log:

10.96.195.70/SEMP/v2/monitor/msgVpns/CPGC_S_SIT/queues/gcg.apac.eventcloud.publish.hk.twa/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.cops/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.sendprioritycommandretrycomm/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.triggerbatchtocommhub/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/ALERTSGC_S_SIT/queues/gcg.ap.cop_163124.card.lightning.eas.queue

So I created the query below to get the VPN and Queue:

*** source="*/sit_solace_updated.txt"
| rex field=_raw max_match=0 "msgVpns\/(?P<VPN_Name>[^\/]+)\/queues\/(?P<Queue_Name>[^\/]+)\/"
| table VPN_Name,Queue_Name
| eval row=mvrange(0,mvcount(VPN_Name))
| mvexpand row
| foreach *_* [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
| fields - row

Set 2, log 1:

"activationTime":1652444666,
"clientName":"source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL",
"msgVpnName":"EHEM_S_SIT",
"queueName":"gcg.emea.eventcloud.publish.ae.cops",

Set 2, log 2:

"activationTime":1650620233,
"clientName":"BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01",
"msgVpnName":"ALERTSGC_S_SIT",
"queueName":"gcg.ap.cop_163124.card.lightning.eas.queue",

And here I used the query below to create a table:

*** source="*/final_sol_queue.txt"
| rex field=_raw "Time\"\:(?P<Act_Time>[^\,]+)\,"
| rex field=_raw "clientName\"\:\"(?P<Client_Name>[^\"]+)\"\,"
| rex field=_raw "VpnName\"\:\"(?P<VPN_Name>[^\"]+)\"\,"
| rex field=_raw "queueName\"\:\"(?P<Queue_Name>[^\"]+)\"\,"
| eval Activation_Time=strftime(Act_Time,"%a, %d %b %Y %H:%M:%S")
| table Activation_Time,Client_Name,VPN_Name,Queue_Name
| dedup Activation_Time,Client_Name,VPN_Name,Queue_Name

It gave this table:

Activation_Time | Client_Name | VPN_Name | Queue_Name
Fri, 22 Apr 2022 15:28:39 | BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01 | ALERTSGC_S_SIT | gcg.ap.cop_163124.card.lightning.eas.queue
Fri, 13 May 2022 17:54:26 | source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.cops

However, I want a table that also contains the VPNs and Queues from set 1 that are missing from set 2, as below:

Activation_Time | Client_Name | VPN_Name | Queue_Name
Fri, 22 Apr 2022 15:28:39 | BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01 | ALERTSGC_S_SIT | gcg.ap.cop_163124.card.lightning.eas.queue
Fri, 13 May 2022 17:54:26 | source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.cops
Not_Available | Not_Available | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.triggerbatchtocommhub
Not_Available | Not_Available | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.sendprioritycommandretrycomm
Not_Available | Not_Available | CPGC_S_SIT | gcg.apac.eventcloud.publish.hk.twa

Please help me modify the query so that it compares set 1 and set 2 on the VPN and Queue values and produces the table above. Thank you All..!!
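A sketch of one way to do this, reusing the same sources and extractions from the question: build the activity table from set 2, append the full VPN/Queue list from set 1, then let stats collapse the two on VPN_Name and Queue_Name so that set-1-only pairs come through with empty activity fields, which fillnull then labels Not_Available:

*** source="*/final_sol_queue.txt"
| rex field=_raw "Time\"\:(?P<Act_Time>[^\,]+)\,"
| rex field=_raw "clientName\"\:\"(?P<Client_Name>[^\"]+)\"\,"
| rex field=_raw "VpnName\"\:\"(?P<VPN_Name>[^\"]+)\"\,"
| rex field=_raw "queueName\"\:\"(?P<Queue_Name>[^\"]+)\"\,"
| eval Activation_Time=strftime(Act_Time,"%a, %d %b %Y %H:%M:%S")
| append
    [ search *** source="*/sit_solace_updated.txt"
      | rex field=_raw max_match=0 "msgVpns\/(?P<VPN_Name>[^\/]+)\/queues\/(?P<Queue_Name>[^\/]+)\/"
      | eval row=mvrange(0,mvcount(VPN_Name))
      | mvexpand row
      | foreach *_* [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
      | fields VPN_Name Queue_Name ]
| stats values(Activation_Time) as Activation_Time values(Client_Name) as Client_Name by VPN_Name, Queue_Name
| fillnull value="Not_Available" Activation_Time Client_Name
| table Activation_Time, Client_Name, VPN_Name, Queue_Name

Caveat: append runs as a subsearch, so the usual subsearch result limits apply if set 1 is large.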
Hi everybody,  I am trying to study for the Splunk IT Service Intelligence Certified Admin exam, but don't currently have the means for the paid online courses. Are there any other free resources that would prepare me for the exam besides the free courses offered by Splunk? 
In going through the Splunk Cloud SPL tutorial, we are told to upload California drought data into Splunk, and we create a dashboard from it. That worked just as explained, but the next day the data was gone. I was using the "All Time" filter, so it is not that I missed data that had aged beyond a narrower time range. The search SPL for the demo is:

source="us_drought_monitor.csv" State = CA date_year=2018
| rex field=County "(?<County>.+) County"
| eval droughtscore = D1 + D2*2 + D3*3 + D4*4
| stats avg(droughtscore) as "2018 Drought Score" by County
| geom ca_county_lookup featureIdField=County

While I have your attention: the tutorial then adds min and max functions to the stats command, ""2018 Drought Score" max(droughtscore) as "Max 2018 Drought Score" min(droughtscore) as "Min 2018 Drought Score" by County". This provided SPL broke the dashboard yesterday when it had been working. Is there something wrong with this SPL that was provided?
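For reference, a sketch of how that stats line should read with all three aggregations in a single command, assuming the tutorial intends them together (each aggregation needs its own function call, and by County must appear exactly once, at the end):

| stats avg(droughtscore) as "2018 Drought Score" max(droughtscore) as "Max 2018 Drought Score" min(droughtscore) as "Min 2018 Drought Score" by County

If the quoted fragment was pasted in without the leading avg(droughtscore) as, the search would be syntactically invalid, which would break the dashboard panel.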
What parameter can I modify in limits.conf to solve this?

The percentage of non high priority searches delayed (80%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=13378. Total delayed Searches=10799
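For context, a sketch of the [search] settings in limits.conf that set the concurrent-search ceiling, which is what usually drives this health warning; the values shown are the defaults, not recommendations, and raising them only helps if the host has spare CPU:

[search]
# Concurrency ceiling is roughly:
#   max_searches_per_cpu * number_of_cpus + base_max_searches
base_max_searches = 6
max_searches_per_cpu = 1

Before changing limits, it is worth checking the scheduler logs (e.g. index=_internal sourcetype=scheduler status=skipped) to see whether a few densely scheduled searches are causing the pile-up.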
Hi, I have the sample event below. Most field values are extracted fine by Splunk's automatic extraction; however, some fields are not extracted correctly and come out only partially, namely FindingDetails and FindingDescription, highlighted below. How can I get auto extraction to pick them up in full, or how can I extract only these two fields at search time with two separate regexes? All fields in the raw event below are separated by commas and all values are encapsulated in double quotes. The feed is set up via DB Connect running SQL against a DB table, and the data comes in as is. Splunk Enterprise is at 8.x. Thanks in advance!!

2022-06-21 10:29:05.000, ID="1234567890", System="SAMPLE", GSystem="SAMPLE", Environment="1 PROD", Datasource="SAMPLE", DBMSProduct="ORACLE", FindingType="Fail", SeverityCode="2 HIGH", SeverityScore="8.0", TestID="1234", TestName="SAMPLE", TestDescription="Application users privileges should be restricted to assignments using application user roles. Granting permissions to accounts is error prone and repetitive. Using roles allows for group management of privileges assigned by function and reduces the likelihood of wrongfully assigned privileges. We recommend assigning permissions to roles and then grant the roles to accounts. This test excludes grantees from a predefined Guardium group called "Oracle exclude default system grantees", APEX_%, ANONYMOUS and PUBLIC grantees. It also excludes grantees for tables SAMPLE and SAMPLE, excludes the DBMS_REPCAT_INTERNAL_PACKAGE table and table name like '%RP'. To exclude certain grantees, you can create an exception group, populate it with authorized grantees and link your group to this test.", FindingDescription="Application users privileges are not restricted to assignments using application user roles. Including 205 items present in test detail exceptions.", FindingDetails="The following users have direct privileges on the tables: Grantee = SAMPLE : Privilege = DELETE : Owner = SAMPLE: Object_name = SAMPLE", TestResultID="123456789", RemediationRule="295", RemediationAssignment="TEST", RemediationAnalysis="Finding passes in the Baseline Configuration. Project needs to initiate tickets in coordination with their DBA to remediate. Specifically a Role needs to be created and the permission flagged in this test added to the role, the user should then be granted to the role and the original permission removed from the user. This is a finding even if documented in the XYZ.", RemediationGuidance="Application users privileges should be restricted to assignments using application user roles. We recommend revoking privileges assigned directly to database accounts and assigning them to roles based on job functions. 
You can use the following command to revoke privileges: revoke <privilege> on <object name> from <user name>; To exclude certain grantees, you can create an exception group, populate it with authorized grantees and link your group to this test.", ExternalReference="STIG_Reference : STIG_SRG", VersionLevel="19", PatchLevel="19.15.0.0.0", Reference="123", VulnerabilityType="PRIV", ScanTimestamp="2022-06-21 06:29:05.0000000", FirstExecution="2022-01-04 06:38:25.0000000", LastExecution="2022-06-21 06:29:56.0000000", CurrentScore="Fail", CurrentScoreSince="2022-01-04 06:38:25.0000000", CurrentScoreDays="168", Account="TEST", AcknowledgedServiceAccount="Yes", SecurityAssessmentName="TEST", CollectorID="123456789045444444", ScanYear="2022", ScanMonth="6", ScanDay="21", ScanCycle="2", Description="TEST", Host="abcdef.sample.net", Port="0000", ServiceName="SAMPLE"  
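A sketch of two search-time rex extractions for just these fields, assuming the values of these two fields never contain double quotes themselves (the embedded quotes and the = / : characters inside values like TestDescription and FindingDetails are the likely reason automatic KV extraction truncates them):

... | rex field=_raw "FindingDescription=\"(?<FindingDescription>[^\"]+)\""
    | rex field=_raw "FindingDetails=\"(?<FindingDetails>[^\"]+)\""

The same regexes could be made permanent as EXTRACT- entries in props.conf on the search head for this sourcetype.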
We currently use Splunk Cloud. I've added the following into the advanced section of "Edit Source Type" to mask some fields coming in from Auth0:

s/\"user_name\"\:\"[^\"]+\"/\"user_name\":\"###############\"/g

However, while this masks the data in the _raw JSON, it doesn't appear to mask the data in the data.user_name field dropdown, which still lists the clear-text value alongside the masked one:

data.user_name
john doe
##########

My question is: is a heavy forwarder setup necessary to mask this data in the extracted fields as well? Thank you.
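For reference, a sketch of the equivalent masking in props.conf on a heavy forwarder; the sourcetype name auth0 is illustrative. Note this rewrites _raw at parse time only, so it fixes the dropdown only if data.user_name is a search-time extraction of _raw; if it arrives as an indexed field (for example via the HEC event endpoint with indexed extractions), it is created independently of the SEDCMD and must be handled separately:

[auth0]
# Mask the JSON user_name value in _raw before it is indexed
SEDCMD-mask_user_name = s/\"user_name\"\:\"[^\"]+\"/\"user_name\":\"###############\"/g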
We had an issue where, for about three weeks, our Splunk Cloud cluster was out of sync. During that time knowledge objects worked intermittently. We didn't know at the time that this also meant our field extractions were working intermittently, so during that three-week period our alerts and searches gave faulty/inconsistent results. Splunk told us that health checks for infrastructure, like cluster health, are the responsibility of the customer and not Splunk: while it is a managed service, they are only responsible for upgrades and for fixing the cluster if it is down. So what health checks are you using? I really want to focus on identifying issues that cause faulty search results, but anything we should be alerting on will help. Currently we are looking in the logs for issues like:

The captain does not share common baseline with * member(s) in the cluster
* is having problems pulling configurations from the search head cluster captain

Thank you!
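A sketch of an alert search over the internal logs for those two symptoms, assuming your Splunk Cloud stack still exposes _internal to your role; the message strings are the ones quoted above:

index=_internal sourcetype=splunkd ("does not share common baseline" OR "problems pulling configurations from the search head cluster captain")
| stats count latest(_time) as latest_time by host, component
| convert ctime(latest_time)

Scheduled hourly with a trigger on any result, this at least surfaces the same desync before it silently breaks knowledge objects again.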
2022-06-12 21:51:42.274 threadId=L4C9D6WIYK2K eventType="RESPONSE" data="<TestRQ>sometestdata</TestRQ>"
2022-06-12 21:51:41.274 threadId=L4C9D6WIYK2K eventType="REQUEST" data="<TestRQ>sometestdata</TestRQ>"
2022-06-12 21:51:40.274 threadId=L4C9D6WIYK2K eventType="HEADER" data="clientIP=101.121.22.11"

Hello Team, I have the series of events shown above. As you can see, the event with eventType="HEADER" carries the clientIP in its data field. I need to fetch the REQUEST and RESPONSE events based on the clientIP mentioned in that third HEADER event. The common unique ID across all three events is threadId. How can I achieve this in a Splunk query? I am new to Splunk and only comfortable with basic searches.

index=test eventType="HEADER" clientIP=101.121.22.11 ------>> and then pass on the threadId to fetch the eventType="REQUEST" and eventType="RESPONSE" events

@ITWhisperer
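A sketch using a subsearch: the inner search collects the threadId values of HEADER events for that client IP, and the outer search uses them to pull the matching REQUEST and RESPONSE events. This assumes threadId is auto-extracted from the key=value pairs and that clientIP is matched as a raw string inside data:

index=test eventType IN ("REQUEST", "RESPONSE")
    [ search index=test eventType="HEADER" data="*clientIP=101.121.22.11*"
      | fields threadId ]
| sort threadId, _time

The subsearch returns threadId=<value> conditions, ORed together, so the outer search only keeps events sharing a threadId with a matching HEADER.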
Are there any recommendations on how best to ingest TLS-encrypted syslog from the AirWatch cloud console? The only settings in AirWatch are to select UDP, TCP, or SECURETCP and the port. I opened a case with VMware and they suggested opening a case with our SIEM provider for recommendations. I suspect the recommendation may be to send the logs to rsyslog and then use a forwarder to send them to Splunk, since the forwarder supports SSL to Splunk. Has anyone successfully done encrypted syslog from AirWatch?
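For what it's worth, Splunk can also terminate TLS syslog directly with a tcp-ssl input on a heavy forwarder or indexer. A sketch of inputs.conf, assuming port 6514 and illustrative certificate paths and sourcetype name:

[tcp-ssl://6514]
sourcetype = airwatch:syslog

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <certificate password>
requireClientCert = false

The usual trade-off is that a native TCP input drops syslog events during Splunk restarts, which is why a dedicated syslog server (rsyslog/syslog-ng) in front of a forwarder is often preferred.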
If I remove edit_user as a capability from a Splunk role, will it impact their ability to edit their user preferences?  Is editing user preferences linked to any role capabilities in the first place?
I've got an on-premises Splunk deployment running Enterprise 8.1.2. I keep having a recurring issue where users report that their searches are being queued due to disk quota:

This search could not be dispatched because the role-based disk usage quota of search artifacts for user "jane.doe" has been reached (usage=7757MB, quota=1000MB). Use the Job Manager to delete some of your search artifacts, or ask your Splunk administrator to increase the disk quota of search artifacts for your role in authorize.conf.

So naturally I go to the Job Manager to see what's up, but what I keep finding is that the jobs don't come anywhere near the quota. This is the second time this issue has come up. Previously I did a lot of digging but was never able to find any record of what was actually using up the quota; that was a while back, so unfortunately I don't have notes from it. I ended up just increasing the quota for the user's role and things started working again. Now that it's happening again, I figured I'd try posting here to see if anyone has advice on how to find what's using up the quota.
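A sketch of a REST-based search that totals artifact disk usage per job for the affected user; this relies on the search/jobs endpoint's diskUsage field (in bytes) and can show artifacts the Job Manager view hides, such as those from scheduled searches dispatched as the user:

| rest /servicesNS/-/-/search/jobs count=0
| search eai:acl.owner="jane.doe"
| eval diskUsageMB = round(diskUsage / 1024 / 1024, 1)
| table sid, label, diskUsageMB, ttl, isSaved, isDone
| sort - diskUsageMB

Long-TTL scheduled searches and alerts that keep tracked artifacts are a common culprit when the interactive job list looks clean.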
Hi Team, recently we received a notification from Splunk to upgrade to the Victoria Experience, and I want to know whether it is better to upgrade to Victoria or to stay with the Classic Experience.

"Dear Splunk Cloud Platform Customer, We are reaching out to inform you of an upcoming maintenance window for Splunk to deliver a cloud migration for your stack: ***** This migration is known as the Victoria Experience."

This is the message we received from the Splunk team. Does anyone have any idea about the Classic and Victoria Experiences? Please suggest which one is better to go with.
All, I have an index with some fields such as appId and responsetime. I also have a lookup file where the appId is the same, but with a proper serviceName linked to each appId.

As an example:

INDEX OUTPUT:
appId, responsetime
202, 1200

Lookup file:
appId, serviceName
202, serviceA

I am looking for syntax where the output replaces the appId with the service name:

serviceName, responsetime
serviceA, 1200

And on top of this, I want to create a chart out of it. I was playing around with a join query and was able to create a table:

index=xx
| dedup appId
| eval duration = RT - FT
| join type=inner appId
    [| inputlookup tmpfile.csv | rename serviceA as URL]
| table appId serviceA responsetime
| where appId = appId

BUT I cannot create charts with avg(responseTime). Can someone help? Thanks. Amit
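A sketch using lookup instead of join, assuming tmpfile.csv has the columns appId and serviceName as described; lookup enriches each event in place, which leaves the data in a shape where stats and chart work directly:

index=xx
| lookup tmpfile.csv appId OUTPUT serviceName
| stats avg(responsetime) as avg_responsetime by serviceName

For a time-based view, replace the stats line with | timechart avg(responsetime) by serviceName.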
Hi there, is it possible to add static thresholds on the Gauge widget? Dashboard Studio does provide thresholds, but it does not provide a metric expression with variable declarations. Regards, Frans
We're looking to create an alert based on the number of failures for a given field value (clientip) per time frame. Here is the search so far:

sourcetype="access_combined" POST 401 "/cas/login"
| stats count by clientip

Basically, we only want to be alerted when the number of events from any unique clientip reaches 10 per minute. We currently have the alert trigger if the number of results is greater than 9.
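A sketch that buckets events per minute and keeps only the client IPs at or above the threshold, so the alert can simply trigger whenever the number of results is greater than 0 (the field names and threshold are taken from the question):

sourcetype="access_combined" POST 401 "/cas/login"
| bin _time span=1m
| stats count by _time, clientip
| where count >= 10

Note that "number of results greater than 9" on the original search counts distinct client IPs rather than events per IP per minute, which is why the threshold is moved into the search itself here.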
My understanding is that MLTK is an out-of-the-box app. In that case, if my Splunk instance is upgraded, will the OOB apps (MLTK) get upgraded automatically too, or do we need to manually update to the latest version from Splunkbase like other apps?
Hello Team, I am using the Splunk REST APIs to integrate Splunk with CPI. For the "Get token configuration details" endpoint I am getting an error. Please help me confirm whether this URL is correct or wrong.

Splunk document URL:
https://localhost:8089//servicesNS/nobody/system/data/inputs/http/http%3A%252F%252F%22myapp%22/http/%252Fvar%252Flog

What is this %3A%252F%252F%22myapp%22/http/%252Fvar%252Flog? Please help me understand, or share the correct URL. Thanks, Venkata
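For context, that trailing segment is a percent-encoded entity name: %3A decodes to ":", %22 to a double quote, and %252F is a doubly encoded "/" (%25 is "%", so %252F decodes once to %2F and again to "/"). The docs example is addressing an input whose stanza name itself contains :// and quotes. A sketch of a simpler request, assuming an HEC token named mytoken whose inputs.conf stanza name is http://mytoken:

# GET one HEC token's configuration; the ":" and "/" characters in the
# stanza name http://mytoken must be percent-encoded in the path.
curl -k -u admin:changeme \
    "https://localhost:8089/services/data/inputs/http/http%3A%2F%2Fmytoken"

If the endpoint still errors, listing all tokens with /services/data/inputs/http first shows the exact entity names that need encoding.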
Hi all, I'm an infrastructure guy and have no clue about Splunk. The only thing I know is that Splunk is writing a lot of unaligned 4K I/O, and in some circumstances 512K sequential writes. Is there any parameter that can be set to tell Splunk to write 4K-aligned I/Os? Thanks. Fred