All Topics



I have a unique query that I think I have a general logical approach to solving, but the syntax and most efficient route are TBD. The case to solve is this: users are assigned positions in an application, and each position is unique. Positions are assigned security groups that are mapped to roles. We are versioning this mapping into Splunk for two reasons: (1) to be able to rewind and show who was in which groups, so that we can run what-if scenarios nine months back without trying to reconstruct what has changed; and (2) to analyze overlap between positions and roles, to help simplify where necessary. The latter is the basis of my question.

I have a table, created off a makemv/mvexpand, that forms a cube of data with Position, GroupName. There are, say, 99 unique positions and 70 unique security groups; expanded, I have just north of 1200 permutations of them:

Position1, SecGroup1
Position1, SecGroup2
Position2, SecGroup2
Position2, SecGroup5
Position3, SecGroup1
Position4, SecGroup2
Etc.

What I need to do is create stats on the overlap relationship where positions are in similar groups. I know, for instance, that in my current data set ALL positions are in SecGroup1 and 68/99 are in SecGroup2. This is easily calculated for one group, but how do I extend this out at scale for all groups?

I am thinking of creating a deduplicated list of security groups, building a full list of all combinations like (SecGroup1 AND SecGroup2) OR (SecGroup1 AND SecGroup3) through every pairing and back again, deduplicating that list, and using it as a subsearch against my raw data, then running stats on it, which in theory should show where two PDs overlap because of the same two groups. Is there a more succinct way of doing this? Can one create this list with a | foreach nested in a foreach? How, in Splunk, can one calculate a list of permutations and force an AND between them as part of a subquery?
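For illustration, here is the shape of the pairwise comparison I'm picturing, done as a self-join instead of a generated AND list (a sketch only; the base search and source names are placeholders for my expanded Position/GroupName table):

    index=myindex sourcetype=position_groups
    | table Position GroupName
    | rename GroupName as GroupA
    | join type=inner max=0 Position
        [ search index=myindex sourcetype=position_groups
          | table Position GroupName
          | rename GroupName as GroupB ]
    | where GroupA < GroupB
    | stats dc(Position) as shared_positions by GroupA GroupB
    | sort - shared_positions

The where GroupA < GroupB keeps each unordered pair only once, and the final stats counts how many positions sit in both groups of each pair.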
I have an index that snapshots an inventory system every day. The inventory is a list of all active circuits; there is a timestamp and date of when the snapshot was taken, plus other details. Only active circuits are included.

I also have a lookup file recording where we tried to install a new piece of equipment. This has the date and time of when we tried to install, which circuit we tried to install on, and whether it was successful or not.

I'm trying to join the lookup to the index where the date in the index is the day prior to the date of the installation. I only want 1 day prior, not closest-date matching. Time is not important, only the date. Here's my search so far:

index="myindex" sourcetype="mysource"
| fields identifier_1 identifier_2
| eval active_date=strftime(load_date, "%m/%d/%Y")
| join type=inner "identifier_1"
    [ | inputlookup mylookup.csv
      | rename ID_1 as identifier_1
      | eval fail_date=strftime(EVENT_TS, "%m/%d/%Y")
      | where active_date=fail_date-1 ]

I'm sure that this is possible, but I'm getting errors. Any help or suggestions would be appreciated.
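For what it's worth, the shape I'm aiming for looks something like this (a sketch, assuming load_date and EVENT_TS are epoch timestamps; the where is moved after the join since active_date doesn't exist inside the subsearch, and the day offset is done with relative_time rather than arithmetic on a formatted string):

    index="myindex" sourcetype="mysource"
    | fields identifier_1 identifier_2 load_date
    | eval active_day=strftime(relative_time(load_date, "@d"), "%m/%d/%Y")
    | join type=inner identifier_1
        [ | inputlookup mylookup.csv
          | rename ID_1 as identifier_1
          | eval prior_day=strftime(relative_time(EVENT_TS, "-1d@d"), "%m/%d/%Y") ]
    | where active_day=prior_day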
Hello everyone. I'm trying to find a way to use the eval command to determine whether or not a field in my stats table has more than one value. Here is the scenario: I have two columns, IP Address in column A and userID in column B. The userID field may have more than one value. I'd like to evaluate the userID field in each row to determine whether more than one userID is listed. If there is, I'd like eval to place a value of 1 in column C; if there is only one userID, a value of 0 in column C. Any help would be very appreciated. Thanks!
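Something like this is what I have in mind, if it's even the right direction (a sketch; the column names are placeholders):

    ... | stats values(userID) as userID by ip_address
    | eval multiple_users=if(mvcount(userID) > 1, 1, 0)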
Hi all, I am new to Splunk and am trying to look for logs that indicate that the splunkd service shut down. I am trying this, but I am not sure if there's a better approach:

index=_internal sourcetype="splunkd" keywords "*shut"
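For comparison, one pattern I've seen suggested is filtering on the shutdown component instead of a bare keyword (I'm not sure this is the canonical way, so treat it as a sketch):

    index=_internal sourcetype=splunkd component=ShutdownHandler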
I'm seeing an authentication failure for the SavedSearchFetcher in all of my SHC members' logs, repeating every 30 seconds, as follows:

10-28-2022 12:52:41.505 +0000 ERROR UserManagerPro [63886 SavedSearchFetcher] - Did not find any info for user=<user redacted>
10-28-2022 12:52:41.726 +0000 INFO AuthenticationProviderSAML [63886 SavedSearchFetcher] - Calling authentication extension function=getUserInfo() for user=<user redacted>
10-28-2022 12:52:42.426 +0000 ERROR AuthenticationProviderSAML [63886 SavedSearchFetcher] - Authentication extension function=getUserInfo() returned status=failed for user=<user redacted>
10-28-2022 12:52:42.426 +0000 ERROR AuthenticationProviderSAML [63886 SavedSearchFetcher] - Error message from function=getUserInfo() : Unable to get user info for username=<user redacted>. This script only officially supports querying usernames by the User Principal Name, Object ID, or Email properties. To use other user properties, use the 'azureUserFilter' argument and search the Microsoft documentation for a full list of properties: "user resource type - Microsoft Graph v1.0" / "Properties"

The <user redacted> does not exist in our SHC, nor is there such a user in our SSO system that supplies the SAML response to our authentication extension. We have hundreds of users and hundreds of saved searches, alerts, and reports running, and this is the only occurrence of this situation. So I have two questions that I cannot answer from my investigation of the logs:

1. How can I find the source of these SavedSearchFetcher calls to the authentication extension? Before you say "look in your saved searches," remember that <user redacted> does not exist on the SHC nor in our SSO IdP. I have looked in every SHC member's ~/etc/users directory for a user that matches <user redacted>, but that doesn't exist either. Bottom line: there are no saved searches for <user redacted>. I've also searched all the savedsearches.conf files for the <user redacted> string (e.g. the deprecated userid setting) and there are none of those either. I've looked at the splunkd.log file before and after these logs (INFO-level logging) and there is no help, which makes some sense because this is an authorization failure, so nothing more should be happening.

2. What else triggers the SavedSearchFetcher other than a REST call or a scheduled search? I have correlated the logs for REST calls against these logs and there is nothing matching the frequency, and as stated above I cannot find a scheduled search that matches either. So where else could this be coming from?

Thanks in advance for your help with this.
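For completeness, a REST sweep along these lines (a sketch) should list every saved search the SHC knows about, regardless of which conf file it lives in, in case anyone wants to rule out an owner match the same way:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | table title eai:acl.owner eai:acl.app search
    | search eai:acl.owner="<user redacted>"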
Hello again, community. Today I received notice that every Friday morning at a particular time there are a lot of new sessions registered in the firewall log, apparently caused somehow by Splunk. The question was passed down: why? So I played around with the metrics log, input/output, etc., though I cannot see any correlated increase or decrease in the numbers observed around the same time. What I ended up with was variations of:

index=_internal source=*metrics.log group=tcp<in|out>_connections
| timechart count by host useother=false

My question: is this a reasonable approach? Otherwise, what would be a better search to get the number of newly established connections between members of the Splunk infrastructure, to figure out whether any components are establishing a higher number of new connections? All the best.
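One variation I'm considering, in case it separates new connections from steady state better than a raw event count, is counting distinct source addresses per interval (a sketch, assuming the tcpin_connections metrics events carry a sourceIp field):

    index=_internal source=*metrics.log group=tcpin_connections
    | timechart span=5m dc(sourceIp) by host useother=false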
Team, we are trying to use the Splunk DB Connect app's "Outputs" option to insert data into a MySQL table. For this we are using the DBX output alert action in a Splunk alert, calling the "Output" from the Splunk DB Connect app. We followed the steps below:

1) Created a new Output named "insertdb_cronjob" under the "Outputs" option in the Splunk DB Connect app.
2) Created an alert.
3) Added the trigger action as the DBX output alert action, where it calls "insertdb_cronjob" to insert data into the MySQL table.
4) Manual execution works fine, but when the alert is triggered we get the below error in the internal logs:

10-28-2022 12:00:01.960 +0530 INFO  sendmodalert [3425498 AlertNotifierWorker-0] - Invoking modular alert action=alert_output for search="si_alert" sid="rt_scheduler__admin_c3BsdW5rX2FwcF9kYl9jb25uZWN0__RMD5f6c0cb1e9f4fe73f_at_1666933094_23674.0" in app="splunk_app_db_connect" owner="admin" type="saved"
10-28-2022 12:00:02.065 +0530 ERROR sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output STDERR -  Error: Could not find or load main class com.splunk.dbx.command.DbxAlertOutput
10-28-2022 12:00:02.068 +0530 INFO  sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output - Alert action script completed in duration=105 ms with exit code=1
10-28-2022 12:00:02.068 +0530 WARN  sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output - Alert action script returned error code=1

Can you please suggest if we are missing anything?

Thanks,
Dinesh
Hello, I have a corrupted warm bucket. What I am trying to do is find out the time interval of the events stored in this bucket. I found the file bucket_info.csv, which has _indextime_et; I assume that is the earliest index time, i.e. the time the first event of the bucket was indexed, right? How can I find the time range of events in a bucket? In other words, is there a way to find the first event indexed in a bucket and the last one? Any help will be appreciated. Thank you.
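In case it helps frame the question: if the bucket metadata is still readable, something like dbinspect should report the event time range per bucket (a sketch; startEpoch and endEpoch are the fields I believe hold the earliest and latest event times), and I understand the bucket directory name itself encodes the range as db_<latestEpoch>_<earliestEpoch>_<id>:

    | dbinspect index=myindex
    | eval first_event=strftime(startEpoch, "%Y-%m-%d %H:%M:%S"), last_event=strftime(endEpoch, "%Y-%m-%d %H:%M:%S")
    | table path state first_event last_event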
In my SPL I use the associate command. However, I've noticed that when I use it, any preliminary search results from before the associate command are no longer available after it runs. Why is that, and how can I preserve earlier search results for use after the associate command?
We want to use the Splunk Add-on for Microsoft Cloud Services to ingest data from Azure Active Directory. For this we send data from our Log Analytics workspace (and an Event Hub) to the Splunk TA, which is working. Unfortunately, the documentation is not very precise about the source types to use. Which source types should I use for the following data?

- Sign-ins (azure:monitor:aad?)
- Audit data (azure:monitor:aad?)
- AAD Risky Users
- User Risk Events

Thanks.
Hi Community,

I have the below search query:

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| search h = hp742srv OR dell970srv OR dell428srv OR hp548srv OR dell429srv OR dell477srv OR dell433srv
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false limit=30

I would like to refine this so that I don't have to manually enter the host names with OR conditions. The query below returns all the host names used in the search command above:

index=m_logs "mx.env"="hp742srv.scz.m.com:24000"
| table host
| dedup host

Is there a way to combine the results of the second query with the first to refine the search command? Thanks in advance.

Regards,
Pravin
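What I'm hoping for is something like feeding the second query in as a subsearch so it generates the host filter automatically (a sketch; the rename to h is so the generated terms match the license_usage field):

    ... | stats sum(b) as b by _time, pool, s, st, h, idx
    | search
        [ search index=m_logs "mx.env"="*"
          | stats count by host
          | rename host as h
          | fields h ]
    | timechart span=1d sum(b) AS volumeB by idx fixedrange=false limit=30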
I'm trying to use the OpenTelemetry collector (opentel-contrib) to collect and push metrics into AppDynamics, and I get a 403 Forbidden in the debug log when calling the URL https://pdx-sls-agent-api.saas.appdynamics.com/v1/metrics. I've checked the following so far:

service.namespace = the name of the application
service.name = the name of the Tier I've created in AppDynamics

In config.yml I have the following set up:

exporters:
  otlphttp:
    endpoint: "https://pdx-sls-agent-api.saas.appdynamics.com"
    headers: {"x-api-key": "<key_copied_from_the_otel_page_in_appdynamics>"}
  logging:
    loglevel: debug

Any tips on where to go next? Is there any documentation on which endpoints exist, and does the type of the Tier affect anything?

br
Kjell
Hi all, I am trying to add a severity column to the output of the first command. Could you please let me know how to do it? The query I have created is:

index=abc source=xyz
| table _time ID STATUS ERROR_Name
| search ERROR_Name IN ("EndDate must be after StartDate", "The following is required: PersonName", ...many others)
| join type=inner ID
    [ search index=abc source=xyz STATUS IN (FATAL,SUCCESS)
      | table _time ID STATUS
      | stats latest(STATUS) as STATUS by ID
      | search STATUS IN (FATAL)
      | fields ID ]
| stats latest(STATUS) as STATUS by ID ERROR_Name
| search STATUS IN (FATAL)
| top 50 ERROR_Name
| appendcols
    [| eval severity = case(ERROR_Name=="EndDate must be after StartDate", "One", ERROR_Name=="The following is required: PersonName", "two")]
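What I suspect I actually need is to compute severity inline after the top, rather than via appendcols, since appendcols pastes rows together positionally and the eval rows wouldn't line up with the top results. A sketch of that idea:

    ... | top 50 ERROR_Name
    | eval severity = case(ERROR_Name=="EndDate must be after StartDate", "One",
        ERROR_Name=="The following is required: PersonName", "Two",
        true(), "unknown")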
Hello, I'm new here; I tried to find the answer to my problem but failed. I'm looking for a method to extract values from 2 different events. These events have some common fields, but I'm not interested in those being part of the output. My events have the following fields (there are more, but these are the ones I would like to operate on):

EventID=10001 time=_time user=mike vlan=mikevlan
EventID=10002 time=_time user=mike L2ipaddress=1.2.3.4

What I'm looking for as a result is a table with combined results from the vlan and L2ipaddress columns where user and time match; then I need a list of all vlans grouped by L2ipaddress:

1.2.3.4|mikevlan,tomvlan,anavlan
1.2.3.5|brianvlan,evevlan
etc.

Any ideas?
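The shape I'm imagining is a two-step stats rollup keyed on user (a sketch; it glosses over the time matching for now, and the index name is a placeholder):

    index=myindex (EventID=10001 OR EventID=10002)
    | stats values(vlan) as vlan, values(L2ipaddress) as L2ipaddress by user
    | stats values(vlan) as vlans by L2ipaddress
    | eval vlans=mvjoin(vlans, ",")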
My dashboard panel won't work; even after changing input values, it always says 'waiting for input'. I am unable to figure out whether I am passing the tokens incorrectly or there is some other issue. Could use some help.
Hi Splunkers. I'm trying to extract fields from Windows DNS debug logs but running into extraction issues for some events. For most events the fields extract OK, but for some events the regex returns more than it should in the field, i.e. it returns the field plus the remaining text of the raw event. It works for most events, extracting the domain correctly as, for example, (3)web(4)site(5)again(3)net(0); but when it fails, it extracts the questionname field as (3)web(4)site(5)again(3)net(0) plus the remaining text to the end of the event. The regex in use is straight out of the Splunk TA for Windows props.conf:

] (?<questiontype>\w+)\s+(?<questionname>.*)

Sample data:
-------
28/10/2022 12:29:22 PM 07AC PACKET 1234523DDF690A11 UDP Snd 10.20.222.111 54c5 R Q [8081 DR NOERROR] A (3)web(4)site(5)again(3)net(0)
UDP response info at 1234523DDF690A11
Socket = 736
Remote addr 10.20.222.111, port 62754
Time Query=20130697, Queued=0, Expire=0
Buf length = 0x0200 (512)
Msg length = 0x0054 (84)
Message:
XID 0x54c5
Flags 0x8180
QR 1 (RESPONSE)
OPCODE 0 (QUERY)
AA 0
TC 0
RD 1
RA 1
Z 0
CD 0
AD 0
RCODE 0 (NOERROR)
QCOUNT 1
ACOUNT 2
NSCOUNT 0
ARCOUNT 0
QUESTION SECTION:
[snipped for brevity]
--------

If I use the regex from the props.conf above in a rex command via SPL, the field is extracted correctly. The same regex also works fine in regex101 etc. (with the same event that causes the issue used as test data). Can anyone explain why the regex works differently when used in props.conf than in direct SPL, and where I should be looking? As mentioned above, the issue only occurs for some events. Note that the DNS events are both single-line and multi-line, with only some multi-line events having the issue. Thanks in advance.
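One workaround I'm considering, regardless of the root cause, is tightening the capture so it can't run past the first whitespace; the same change would apply to the props.conf EXTRACT (a sketch):

    ... | rex "\]\s+(?<questiontype>\w+)\s+(?<questionname>\S+)"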
Hi all. I'm new to Splunk Cloud, so I installed the JIRA Cloud Add-on for Splunk Cloud by following these steps. But when I search on the index that I configured in step 4, I get no results. And when I try to do this, it shows 'Unknown search command 'jira''. Did I miss something here? Please kindly help. Thank you so much.
Hello Team, I want to implement pool enforcement policies in Splunk. Please suggest how I can proceed; if any documentation is available, please share it. I want to implement pool enforcement for:

1. high_perf pool
2. limited_perf pool
3. standard_perf pool
We re-routed data from Splunk SaaS (cloud) to on-prem, but we see an event-count mismatch between the two instances. If I route the data to Splunk Cloud the event count increases, but when I re-route the same data source to on-prem the event count drops drastically for the same time period, and I don't see any errors in the FW.

FW --> Splunk SaaS: higher count.
FW --> Splunk on-prem: lower count.

Please find the screenshot, and let me know what the issue might be and how to troubleshoot this count mismatch. Thanks.
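For measuring the gap, I'm running the same count over an identical fixed window on both instances (a sketch; the index name is a placeholder):

    | tstats count where index=fw_index earliest=-1d@d latest=@d by _time span=1h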