All Topics
I have two files I want to monitor in the same directory with two different sourcetypes. My issue is that both files are being picked up by sourcetype a because of the wildcard. The wildcard is needed for the dates that follow the log name. I tried blacklisting the diagnostic file from sourcetype a, but that did not work.

[monitor://E:\path\to\log\directory\HFMWeb*-diagnostic.log]
sourcetype = <sourcetype b>
disabled = false
index = <index>
crcSalt = <SOURCE>

[monitor://E:\path\to\log\directory\HFMWeb*.log]
sourcetype = <sourcetype a>
disabled = false
index = <index>
crcSalt = <SOURCE>
blacklist = \-diagnostic

Any ideas on how I can exclude the diagnostic file from sourcetype a but still include it in sourcetype b?
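A sketch of an inputs.conf tweak that may help (assuming the diagnostic file should only ever match the second stanza): blacklist is a regex matched against the full file path, so escaping the dot and anchoring at the end makes the intent unambiguous. Note too that Splunk tails a given file under only one monitor stanza, so the broad stanza has to exclude the file before the specific stanza can claim it:

[monitor://E:\path\to\log\directory\HFMWeb*.log]
sourcetype = <sourcetype a>
disabled = false
index = <index>
crcSalt = <SOURCE>
# exclude anything ending in -diagnostic.log so the HFMWeb*-diagnostic.log stanza picks it up
blacklist = -diagnostic\.log$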
I am trying to ingest a new log and, unfortunately, it doesn't include the year or time zone as part of the message. The timestamp in the messages is in the following format:

Jun 30 01:02:03 <msg>

I wrote the following props.conf settings to extract the timestamp from the message:

[new_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 15
TIME_FORMAT = %b %d %H:%M:%S

I see the following warnings in splunkd.log:

06-30-2022 14:05:59.555 -0600 WARN DateParserVerbose [1556614 merging] - The TIME_FORMAT specified is matching timestamps (Tue Jun 6 17:43:20 2023) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=/path/to/log|host=UF01|new_sourcetype|230

I'm confused about where "(Tue Jun 6 17:43:20 2023)" is coming from, because none of the logs contain this string. How do I approach this? I've thought about using transforms to write into the DEST_KEY "_time", but I read that any key starting with "_" is not indexed. This data is being received from a syslog server, so I also thought about modifying the data as it's being received. What are your recommendations?
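A sketch of props.conf additions that often address both symptoms — the missing year lets Splunk guess a wrong year, and the missing zone lets it assume the indexer's zone (the TZ value here is an assumption; use the sender's actual zone):

[new_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 15
TIME_FORMAT = %b %d %H:%M:%S
# no year in the event: constrain how far the guessed date may drift
MAX_DAYS_AGO = 10
MAX_DAYS_HENCE = 2
# no zone in the event: pin the sender's time zone explicitly
TZ = America/Denver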
Hi,

I have been through the docs but don't see any specific detail regarding the system hardware needed for specific roles. Let's say you have a 3-member SHC and a 10-member IDXC. If you were to put the deployer role on a separate box, what would you need at a minimum? (For instance, maybe 8 cores, 12G RAM, 100G disk.) And what would you need for the following roles if each and every one was on a dedicated VM?

License Manager
Monitoring Console
Index Cluster Master
Deployment Server

I find the documentation is more specific for SHs and Indexers, but per the documentation: "For detailed sizing and resource allocation recommendations, contact your Splunk account team". I am up against a HW resource wall and need to shuffle some things around, so any advice or lessons learned on the "minimum hardware requirements" topic is greatly appreciated.
I want to run a query where:

1. Query1 returns resultset1 containing myEvent1.uid.
2. Query2 returns resultset2 containing myEvent2.uid, which is a subset of the myEvent1.uid values.
3. Filter myEvent1 events and discard any that don't have a matching myEvent2.uid.

This can be done easily with an inner join, but the resultset2 dataset is larger than 50k events, so I cannot use a join. What I want is to do an inner join without using join! (I'm also practicing not using join in general, but I really can't use join in this case.) I saw some other posts that use join and other tricks, and I tried different approaches with coalesce() and creating new fields, but I haven't figured out a way that works. Thanks in advance!
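One join-free pattern is to search both event sets in a single query and correlate with stats on the shared key (a sketch; the index and sourcetype names are placeholders for your two queries):

(index=myindex sourcetype=myEvent1) OR (index=myindex sourcetype=myEvent2)
| stats dc(sourcetype) as set_count values(*) as * by uid
| where set_count=2

Keeping only uid values seen in both sets reproduces the inner join without the 50k subsearch limit, since everything happens in one streaming pass.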
Hi, I have a table similar to this:

Brand   ID_EMP
Nike    123
Adidas  456
Lotto   123

and another table like this:

code  name
123   Smith
456   Myers

The result should be:

Nike    123  Smith
Adidas  456  Myers
Lotto   123  Smith

but instead of this I'm getting:

Nike    123  Smith John
Adidas  456  Myers Mike

This is the query that I'm using:

| dbxquery query="SELECT * FROM Clients ;" connection="Clients"
| where NOT like(SAFE_NAME,"S0%")
| rename ID_EMP as code
| join type=left
    [| inputlookup append=t Emp.csv
     | table code, name, STATUS ]
| fillnull value="Blank" STATUS
| dedup code
| where STATUS!="Blank"
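If Emp.csv contains more than one row per code, join type=left can attach several names to one result, which would explain the extra names. A lookup avoids both that and join's limits (a sketch, assuming Emp.csv is an uploaded lookup file with fields code, name, and STATUS):

| dbxquery query="SELECT * FROM Clients ;" connection="Clients"
| where NOT like(SAFE_NAME,"S0%")
| rename ID_EMP as code
| lookup Emp.csv code OUTPUT name STATUS
| where isnotnull(STATUS)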
Hello, are there any ways we can hide (encrypt) the USERNAME and PASSWORD in our REST API calls? The main reason is that our client doesn't want the username and password disclosed. Any help or recommendation would be highly appreciated. Thank you so much.
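If this concerns Splunk's own REST API, one hedged option is Splunk's encrypted credential store, so scripts never carry the plaintext (a sketch; the app, realm, and account names are placeholders):

# store a credential (encrypted at rest in the app's passwords.conf)
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/myapp/storage/passwords \
    -d name=svc_account -d password='s3cret' -d realm=myrealm

# read it back later (requires the list_storage_passwords capability)
curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/nobody/myapp/storage/passwords/myrealm%3Asvc_account%3A

Authentication tokens are another route if the goal is simply to stop passing username/password on each call.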
Hi everyone, newbie here again. Is it possible to have a query in Splunk similar to the LIKE function in SQL, matching on MM/DD/YYYY, in order to set a different MAX VALUE and INTERVAL for the Y-axis? Would it also be possible to use just the month and year for the condition, so that regardless of what date the user may enter, it would not affect the result?
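If the goal is a SQL-LIKE condition on the date, SPL's like() eval function uses the same % wildcard, so matching on month and year only might look like this (a sketch; date_field is a placeholder for however the date reaches your search):

| where like(date_field, "06/%/2022")

The Y-axis maximum and interval, on the other hand, are visualization options rather than SPL, e.g. in Simple XML:

<option name="charting.axisY.maximumNumber">100</option>
<option name="charting.axisLabelsY.majorUnit">10</option>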
Is anyone using the same internally signed TLS certificate for both the host (splunkd) and Splunk Web? If so, is there anything additional to the cert process I should know while setting something like that up?
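For reference, the same certificate ends up referenced in two places (a sketch with placeholder paths; Splunk Web additionally wants the private key as its own file):

# server.conf - splunkd / management port
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <key password>

# web.conf - Splunk Web
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key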
I am wondering if one can tie extracted fields to a specific source rather than a sourcetype. The structure of incoming events changes based on the source even though they share the same sourcetype, so one extracted field gets the correct value and another does not unless I can specify which source I am looking at. I also have the problem where events with the same host, source, and sourcetype have different structures from event to event, and I am not sure how to extract fields in that situation either. Any suggestions are much appreciated.
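props.conf stanzas can be scoped to a source (or a host) instead of a sourcetype, which covers the first case (a sketch; the path and regex are placeholders):

[source::/var/log/app/format_a*.log]
EXTRACT-format_a = ^(?<timestamp>\S+)\s+(?<level>\w+)\s+(?<message>.+)

For events whose structure varies within a single source, several EXTRACT- rules can coexist on the same stanza; a rule whose regex doesn't match an event simply contributes no fields for it.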
I am upgrading the Splunk UF from 8.0.5 to 9.0.0 on all my Linux flavours (RHEL 6, 7, 8, Amazon Linux 2018 and 2, and CentOS 7). It installed properly everywhere except RHEL 6 and Amazon Linux 2018. When I execute the command below through an automated script, it hangs, but surprisingly, when I execute the same command from a tty it works fine. I used both shells (sh and bash) in my shebang. I also found a few hung child processes when I ran ps -eaf | grep splunk.

/opt/splunkforwarder/bin/splunk start --accept-license --no-prompt
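When splunk start hangs only under automation, a common culprit is the process waiting on stdin or a tty; detaching stdin and answering any prompt explicitly is worth trying (a sketch, not a confirmed fix for 9.0 on RHEL 6):

#!/bin/bash
# redirect stdin from /dev/null so splunk cannot block waiting for terminal input
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt < /dev/null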
Hi All,

I have two sets of logs in two different sources in Splunk: one containing the predefined list of VPNs and Queues, and the other containing the list of VPNs and Queues where activity has occurred. I now have to compare the 2nd set against the 1st set and create a table.

Set 1 log:

10.96.195.70/SEMP/v2/monitor/msgVpns/CPGC_S_SIT/queues/gcg.apac.eventcloud.publish.hk.twa/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.cops/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.sendprioritycommandretrycomm/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/EHEM_S_SIT/queues/gcg.emea.eventcloud.publish.ae.triggerbatchtocommhub/txFlows
10.96.195.70/SEMP/v2/monitor/msgVpns/ALERTSGC_S_SIT/queues/gcg.ap.cop_163124.card.lightning.eas.queue

So I created the below query to get the VPN & Queue:

*** | source="*/sit_solace_updated.txt"
| rex field=_raw max_match=0 "msgVpns\/(?P<VPN_Name>[^\/]+)\/queues\/(?P<Queue_Name>[^\/]+)\/"
| table VPN_Name, Queue_Name
| eval row=mvrange(0, mvcount(VPN_Name))
| mvexpand row
| foreach *_* [| eval <<FIELD>>=mvindex(<<FIELD>>, row)]
| fields - row

Set 2 logs:

log1:
"activationTime":1652444666,
"clientName":"source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL",
"msgVpnName":"EHEM_S_SIT",
"queueName":"gcg.emea.eventcloud.publish.ae.cops",

log2:
"activationTime":1650620233,
"clientName":"BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01",
"msgVpnName":"ALERTSGC_S_SIT",
"queueName":"gcg.ap.cop_163124.card.lightning.eas.queue",

And here I used the below query to create a table:

*** | source="*/final_sol_queue.txt"
| rex field=_raw "Time\"\:(?P<Act_Time>[^\,]+)\,"
| rex field=_raw "clientName\"\:\"(?P<Client_Name>[^\"]+)\"\,"
| rex field=_raw "VpnName\"\:\"(?P<VPN_Name>[^\"]+)\"\,"
| rex field=_raw "queueName\"\:\"(?P<Queue_Name>[^\"]+)\"\,"
| eval Activation_Time=strftime(Act_Time, "%a, %d %b %Y %H:%M:%S")
| table Activation_Time, Client_Name, VPN_Name, Queue_Name
| dedup Activation_Time, Client_Name, VPN_Name, Queue_Name

It gave the below table:

Activation_Time | Client_Name | VPN_Name | Queue_Name
Fri, 22 Apr 2022 15:28:39 | BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01 | ALERTSGC_S_SIT | gcg.ap.cop_163124.card.lightning.eas.queue
Fri, 13 May 2022 17:54:26 | source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.cops

However, I want to create a table that also contains the VPNs & Queues from set 1 that are missing from set 2, as below:

Activation_Time | Client_Name | VPN_Name | Queue_Name
Fri, 22 Apr 2022 15:28:39 | BW-ALERTS_JMS_CONN-queue-ALERTSServices-1-1-AlertServices_ALERTSServices_a37s_01 | ALERTSGC_S_SIT | gcg.ap.cop_163124.card.lightning.eas.queue
Fri, 13 May 2022 17:54:26 | source.solace.emea.ae.custmgt.cops.cops.raw.int.rawevent-05d1d1b@gtgcb-csrla01s.nam.nsroot.net/59133/#00a00005/VIzOONZsrL | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.cops
Not_Available | Not_Available | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.triggerbatchtocommhub
Not_Available | Not_Available | EHEM_S_SIT | gcg.emea.eventcloud.publish.ae.sendprioritycommandretrycomm
Not_Available | Not_Available | CPGC_S_SIT | gcg.apac.eventcloud.publish.hk.twa

Please help me modify the query so that it compares set 1 and set 2 on the VPN & Queue values and produces the table in the above manner. Thank you all!
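One way to surface the missing pairs is to append the set 1 list to the set 2 results and keep the first row per VPN/Queue pair, so rows with activity win and the rest fall through as Not_Available (a sketch assembled from the two queries above, untested against your data; *** stands for your base search terms, and append's subsearch limits apply):

*** source="*/final_sol_queue.txt"
| rex field=_raw "Time\"\:(?P<Act_Time>[^\,]+)\,"
| rex field=_raw "clientName\"\:\"(?P<Client_Name>[^\"]+)\"\,"
| rex field=_raw "VpnName\"\:\"(?P<VPN_Name>[^\"]+)\"\,"
| rex field=_raw "queueName\"\:\"(?P<Queue_Name>[^\"]+)\"\,"
| eval Activation_Time=strftime(Act_Time, "%a, %d %b %Y %H:%M:%S")
| table Activation_Time, Client_Name, VPN_Name, Queue_Name
| append
    [ search *** source="*/sit_solace_updated.txt"
      | rex field=_raw max_match=0 "msgVpns\/(?P<VPN_Name>[^\/]+)\/queues\/(?P<Queue_Name>[^\/]+)\/"
      | eval row=mvrange(0, mvcount(VPN_Name))
      | mvexpand row
      | foreach *_* [| eval <<FIELD>>=mvindex(<<FIELD>>, row)]
      | table VPN_Name, Queue_Name ]
| fillnull value="Not_Available" Activation_Time Client_Name
| dedup VPN_Name Queue_Name

Because the set 2 rows come first, dedup keeps them and drops the duplicate appended rows, leaving Not_Available only for pairs that never saw activity.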
Hi everybody,  I am trying to study for the Splunk IT Service Intelligence Certified Admin exam, but don't currently have the means for the paid online courses. Are there any other free resources that would prepare me for the exam besides the free courses offered by Splunk? 
In going through the Splunk Cloud SPL tutorial, we are told to upload California drought data into Splunk, and we create a dashboard from it. That worked just as explained, but the next day the data was gone. I was using the "All Time" filter, so it was not that I missed data that had aged out of the filter window. Here is the search SPL from the demo:

source="us_drought_monitor.csv" State=CA date_year=2018
| rex field=County "(?<County>.+) County"
| eval droughtscore = D1 + D2*2 + D3*3 + D4*4
| stats avg(droughtscore) as "2018 Drought Score" by County
| geom ca_county_lookup featureIdField=County

While I have your attention: the tutorial then adds min and max functions, max(droughtscore) as "Max 2018 Drought Score" min(droughtscore) as "Min 2018 Drought Score", and this provided SPL broke the dashboard, which had been working the day before. Is there something wrong with the SPL that was provided?
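For what it's worth, all three aggregations belong in one stats call; a sketch of how I'd expect the combined search to read:

source="us_drought_monitor.csv" State=CA date_year=2018
| rex field=County "(?<County>.+) County"
| eval droughtscore = D1 + D2*2 + D3*3 + D4*4
| stats avg(droughtscore) as "2018 Drought Score" max(droughtscore) as "Max 2018 Drought Score" min(droughtscore) as "Min 2018 Drought Score" by County
| geom ca_county_lookup featureIdField=County

If that parses, a choropleth map panel built for a single numeric series may still reject the extra columns, so the panel configuration rather than the SPL could be what broke.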
What parameter can I modify in limits.conf to solve this?
What parameter can i modify in limits.conf to solve that? The percentage of non high priority searches delayed (80%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=13378. Total delayed Searches=10799
Hi, I have the sample event below. Most field values are extracted fine by Splunk's auto extraction, but two fields, FindingDetails and FindingDescription, are only extracted partially. How can I get auto extraction to capture them in full, or how can I extract just these two fields at search time with two separate regexes? All fields are separated by commas in the raw event below and all values are encapsulated in double quotes. The feed is set up via DB Connect running SQL against a DB table, and the data comes in as is. Splunk Enterprise is at 8.x. Thanks in advance!!

2022-06-21 10:29:05.000, ID="1234567890", System="SAMPLE", GSystem="SAMPLE", Environment="1 PROD", Datasource="SAMPLE", DBMSProduct="ORACLE", FindingType="Fail", SeverityCode="2 HIGH", SeverityScore="8.0", TestID="1234", TestName="SAMPLE", TestDescription="Application users privileges should be restricted to assignments using application user roles. Granting permissions to accounts is error prone and repetitive. Using roles allows for group management of privileges assigned by function and reduces the likelihood of wrongfully assigned privileges. We recommend assigning permissions to roles and then grant the roles to accounts. This test excludes grantees from a predefined Guardium group called "Oracle exclude default system grantees", APEX_%, ANONYMOUS and PUBLIC grantees. It also excludes grantees for tables SAMPLE and SAMPLE, excludes the DBMS_REPCAT_INTERNAL_PACKAGE table and table name like '%RP'. To exclude certain grantees, you can create an exception group, populate it with authorized grantees and link your group to this test.", FindingDescription="Application users privileges are not restricted to assignments using application user roles. Including 205 items present in test detail exceptions.", FindingDetails="The following users have direct privileges on the tables: Grantee = SAMPLE : Privilege = DELETE : Owner = SAMPLE: Object_name = SAMPLE", TestResultID="123456789", RemediationRule="295", RemediationAssignment="TEST", RemediationAnalysis="Finding passes in the Baseline Configuration. Project needs to initiate tickets in coordination with their DBA to remediate. Specifically a Role needs to be created and the permission flagged in this test added to the role, the user should then be granted to the role and the original permission removed from the user. This is a finding even if documented in the XYZ.", RemediationGuidance="Application users privileges should be restricted to assignments using application user roles. We recommend revoking privileges assigned directly to database accounts and assigning them to roles based on job functions. 
You can use the following command to revoke privileges: revoke <privilege> on <object name> from <user name>; To exclude certain grantees, you can create an exception group, populate it with authorized grantees and link your group to this test.", ExternalReference="STIG_Reference : STIG_SRG", VersionLevel="19", PatchLevel="19.15.0.0.0", Reference="123", VulnerabilityType="PRIV", ScanTimestamp="2022-06-21 06:29:05.0000000", FirstExecution="2022-01-04 06:38:25.0000000", LastExecution="2022-06-21 06:29:56.0000000", CurrentScore="Fail", CurrentScoreSince="2022-01-04 06:38:25.0000000", CurrentScoreDays="168", Account="TEST", AcknowledgedServiceAccount="Yes", SecurityAssessmentName="TEST", CollectorID="123456789045444444", ScanYear="2022", ScanMonth="6", ScanDay="21", ScanCycle="2", Description="TEST", Host="abcdef.sample.net", Port="0000", ServiceName="SAMPLE"  
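Because these long values can contain commas and embedded quotes, a lazy match that stops at the next known field name is more robust than [^\"]+ (a sketch against the sample above; adjust the terminating key names if your schema differs):

| rex field=_raw "FindingDescription=\"(?<FindingDescription>.*?)\",\s*FindingDetails="
| rex field=_raw "FindingDetails=\"(?<FindingDetails>.*?)\",\s*TestResultID="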
We currently use Splunk Cloud. I've added the following into the advanced section of the "Edit Source Type" page to mask some fields coming in from Auth0:

s/\"user_name\"\:\"[^\"]+\"/\"user_name\":\"###############\"/g

However, while this masks the data in the _raw JSON, it doesn't appear to mask the data in the data.user_name event dropdown. See below:

data.user_name
john doe
##########

My question is: is a heavy forwarder setup necessary to mask this data in the events as well? Thank you.
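A possible explanation (an assumption about your setup, worth verifying): if data.user_name is produced by indexed extractions, e.g. INDEXED_EXTRACTIONS=json applied on the forwarder, the field is parsed before your SEDCMD runs, so the indexed field keeps the real value. The mask then has to be applied wherever the JSON is actually parsed, for example on a heavy forwarder:

# props.conf on the parsing component (the sourcetype name is a placeholder)
[auth0:logs]
SEDCMD-mask_user = s/"user_name":"[^"]+"/"user_name":"###############"/g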
We had an issue where, for three weeks, our Splunk Cloud cluster was out of sync. During that time knowledge objects worked intermittently. We didn't know at the time that this also meant our field extractions were working intermittently, so during that 3-week period our alerts and searches gave faulty and inconsistent results.

Splunk told us that health checks for infrastructure, like cluster health, are the responsibility of the customer, not Splunk. While it is a managed service, they are only responsible for upgrades and for fixing the cluster if it is down.

So what health checks are you using? I really want to focus on identifying issues that will cause faulty search results, but anything we should be alerting on will help. Currently we are looking in the logs for issues like:

The captain does not share common baseline with * member(s) in the cluster
* is having problems pulling configurations from the search head cluster captain

Thank you!
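As a starting point, those two messages can be alerted on directly from the internal logs (a sketch; schedule it as an alert that fires when results exist):

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
    ("does not share common baseline" OR "problems pulling configurations from the search head cluster captain")
| stats count by host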
2022-06-12 21:51:42.274 threadId=L4C9D6WIYK2K eventType="RESPONSE" data="<TestRQ>sometestdata</TestRQ>"
2022-06-12 21:51:41.274 threadId=L4C9D6WIYK2K eventType="REQUEST" data="<TestRQ>sometestdata</TestRQ>"
2022-06-12 21:51:40.274 threadId=L4C9D6WIYK2K eventType="HEADER" data="clientIP=101.121.22.11"

Hello Team, I have the series of events shown above. The event with eventType="HEADER" carries a clientIP in its data field. I need to fetch the REQUEST and RESPONSE events based on the clientIP mentioned in that HEADER event. The common unique ID across all 3 events is threadId. How can I achieve this in a Splunk query? I'm new to Splunk and only comfortable with basic searches.

index=test eventType="HEADER" clientIP=101.121.22.11 ->> and then pass the threadId on to fetch eventType="REQUEST" and eventType="RESPONSE"

@ITWhisperer
Are there any recommendations on how best to ingest TLS-encrypted syslog from the AirWatch Cloud Console? The only settings in AirWatch are to select UDP, TCP, or SECURETCP and the port. I opened a case with VMware and they suggested opening a case with our SIEM provider for recommendations. I suspect the recommendation may be to send logs to an rsyslog server and then use a forwarder to send them to Splunk, since that path supports SSL to Splunk. Has anyone successfully done encrypted syslog from AirWatch?
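For what it's worth, Splunk can also terminate TLS syslog itself with a tcp-ssl input, so SECURETCP straight from AirWatch may work (a sketch; the port, sourcetype, and cert paths are placeholders, and the rsyslog-plus-forwarder pattern you describe remains the usual recommendation for syslog at scale):

# inputs.conf on the receiving Splunk instance
[tcp-ssl:6514]
sourcetype = airwatch:syslog

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/syslogServerCert.pem
sslPassword = <certificate key password>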
If I remove edit_user as a capability from a Splunk role, will it impact their ability to edit their user preferences?  Is editing user preferences linked to any role capabilities in the first place?