All Topics

My requirement is to use the results of a sub-search together with the results of the main search, but the sourcetype/source differs between the main search and the sub-search, and I'm not getting the expected results when using the format command or $field_name. inputlookup host.csv consists of the list of hosts to be monitored.

Main search:
index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv ] | eval host=mvindex(split(host,"."),0) | stats avg(pctIdle) AS CPU_Idle by host | eval CPU_Idle=round(CPU_Idle,0) | eval warning=15, critical=10 | where CPU_Idle<=warning | sort CPU_Idle

Sub-search:
[search index=abc source=top | dedup USER | return $USER]
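Because the field returned by the sub-search (USER) does not exist in the cpu data, one approach is to rename it inside the sub-search to the field the main search should filter on. A minimal sketch, assuming the USER values are meant to match the host field of the cpu events (adjust the rename if they map to a different field):

    index=abc source=cpu sourcetype=cpu CPU=all
        [| inputlookup host.csv | fields host ]
        [ search index=abc source=top | dedup USER | rename USER AS host | fields host | format ]
    | eval host=mvindex(split(host,"."),0)
    | stats avg(pctIdle) AS CPU_Idle by host
    | eval CPU_Idle=round(CPU_Idle,0)
    | eval warning=15, critical=10
    | where CPU_Idle<=warning
    | sort CPU_Idle

Note that format produces exact host="value" terms, so if the cpu events carry fully qualified host names the comparison may need a wildcard or a match done after the mvindex/split step.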
Hey Splunkers, I have the following search but it is not working as expected. What I am trying to achieve is: if one of the conditions matches, table out some fields. Condition 1: user_action="Update*". Condition 2: within each 5-minute bucket, any user has accessed more than 400 destinations in the same index, index1. The search below doesn't work. How can I check both conditions in the same search? Thanks in advance!

index=index1 ``` condition 1 ``` ( user_action="Update*" ) OR ``` condition 2 ``` ( [search index=index1 NOT user IN ("system*", "nobody*") | bin _time span=5m | stats values(dest) count by _time, user | where count > 400 ] ) | table _time, user, dest
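Since a subsearch result cannot easily be ORed against a field match in the base search, one alternative shape (a sketch, not tested against this data; the field names user_action, user, and dest are taken from the post above) is to compute the per-user destination count inline with eventstats so both conditions can be evaluated in a single where clause:

    index=index1 NOT user IN ("system*", "nobody*")
    | bin _time span=5m
    | eventstats dc(dest) AS dest_count BY _time, user
    | where like(user_action, "Update%") OR dest_count > 400
    | table _time, user, dest

Note that this also excludes the system/nobody accounts from condition 1; if that is not intended, move the NOT filter out of the base search and into the where clause.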
Hello all, We are starting to integrate Splunk into our systems, and to make sure everything goes smoothly we want to write a PowerShell script for the installation. We use Splunk Cloud, so we are unsure whether there is a way to use a PowerShell script to install it across our systems. We would like guidance where possible. Thank you
Hello All, Basic questions on using table row highlighting. 1. Do I need to have an "app" to use the various JavaScript and jQuery scripts, etc.? 2. A quote from the install area: "you need to save the libraries directly into your app directory." I do not have an "app" in this path: $SPLUNK_HOME/etc/apps/[YourApp]/appserver/static/table_cell_highlighting.js. I see where all these scripts and libraries are stored in the simple_xml_examples path. I only have a dashboard which returns/shows a list of users and hosts, and whether or not the user is logged on. It is just a dashboard, not an "app". There is a "search" app in the path above ($SPLUNK_HOME/etc/apps); would I create a directory there called "my_app", or something like that, and put all the jQuery scripts and the CSS and the JS in that directory? The install process for the Dashboard Examples is not really clear to me. Thanks, eholz1
Hi all, I have the below query:

index=advcf request=* host=abgc host=efgh host=jhty host=hjyu host=kjnbh

I want the email alert to trigger when data is not coming from any one of the hosts, and I want to see that host name in a table in the email. How can I do that?
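One common pattern for a "missing hosts" alert (a sketch; the index and host list are taken from the query above, adjust as needed) is to count events per expected host and keep only the hosts whose count is zero:

    | tstats count WHERE index=advcf (host=abgc OR host=efgh OR host=jhty OR host=hjyu OR host=kjnbh) BY host
    | append
        [| makeresults
         | eval host=split("abgc,efgh,jhty,hjyu,kjnbh", ",")
         | mvexpand host
         | eval count=0 ]
    | stats sum(count) AS count BY host
    | where count=0

Set the alert to trigger when the number of results is greater than zero; the result table then lists the silent hosts. tstats counts all events per host regardless of the request=* filter, so if that filter matters, the same append/stats pattern can be used after the original search with stats count by host.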
I do know that you need a Deployer to send apps to a Search Head Cluster, but there is one thing I cannot find an answer to: can I send apps from the Deployment Server to the Deployer? I found this answer under "Deployment Server vs. Deployer": "A deployment server is used to deploy apps to forwarders (and technically could be used to deploy apps to other Splunk servers as well, but with a number of caveats)." Since the Deployer is a Splunk server, I guess so. Otherwise it would be hard to keep the apps identical across the various servers.
Hello all, This is my first post here. I have been learning Splunk over the past few months and I am loving it. I am running into an interesting issue. I am using the transaction command to group events together. An example of the queries is below:

index::web earliest=-30m | transaction maxspan=3s

Result:
STARTED RECORD UPDATE: {"update"=>"contacts", "ordered"=>true, "updates"=>[{"q"=>{"_id"=>BSON::ObjectId('123456789')}
COMPLETED record update {"status": 200}

But if I append a table command to display the _raw field, some of the characters are automatically encoded, as shown below:

index::web earliest=-30m | transaction maxspan=3s | table _raw

Result:
STARTED RECORD UPDATE: {&quot;update&quot;=&gt;&quot;contacts&quot;, &quot;ordered&quot;=&gt;true, &quot;updates&quot;=&gt;[{&quot;q&quot;=&gt;{&quot;_id&quot;=&gt;BSON::ObjectId('123456789')}
COMPLETED record update {&quotstatus&quot: 200}

I tried recreating this behavior by using makeresults, but in that case it works as I would expect. Does anyone have an idea of why this might be happening? Thanks, Julio
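For reference, a minimal reproduction of the kind described (a sketch with a single hand-built event, not the exact test used) would look something like this:

    | makeresults
    | eval _raw="STARTED RECORD UPDATE: {\"update\"=>\"contacts\", \"ordered\"=>true}"
    | transaction maxspan=3s
    | table _raw

With a synthetic event like this the quotes come back unencoded, which matches the behavior described for the makeresults test.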
I am trying to understand what would cause a variance between the volume counted against our quota and the size of the logs we are ingesting. I have been failing to find an explanation, and am wondering if anybody else has figured out the reason for the variance. As an example, I am sending syslog from a wifi AP to our syslog server that is running a UF. On the syslog server, on Oct 27 I am getting 484 lines of logs, and those logs total 91,462 bytes in the log file. In Splunk, if I search for these events using this search:

index=network sourcetype=wifi | eval eventSize=len(_raw) | stats sum(eventSize) count by sourcetype

I also get 484 events, but using the len(_raw) function the total length is 90,978 characters. I would assume these numbers should be pretty close, and they are. Now when I look in the _internal index at the *license_usage.log metrics for the wifi sourcetype, using this query:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage st=wifi | stats sum(b) as bytes by st

I get a different sum of 103,794 bytes. I am trying to determine how this makes sense. This isn't a large difference (roughly 12%), but it is spread out over every index and sourcetype, and combined it adds up to a large part of our license quota that I cannot account for. Another example is our firewall logs, which is one of our largest indexes. For Oct 27, len(_raw) = 62,899,298,079 characters, while *license_usage.log = 81,139,209,296 bytes. This is a difference of roughly 22% over a much larger percentage of our quota. There are other large indexes with similar variance. I only have one production cluster, and I don't have a great way to verify this would be the same result on someone else's cluster. Do other people have this issue? I am trying to find a logical reason for why this would be the case. Things that I have tried to track down:
1) Using the License Usage dashboards in the Monitoring Console of our LM: on License Usage - Today I see numbers that align with the len(_raw) metric. Using the Historical License Usage dashboard: if I switch to the "NoSplit" parameter, I get numbers that also align with the len(_raw) metrics. If I change the Historical License Usage parameter to "SplitByIndex", I get numbers that align with the *license_usage.log metrics.
2) I have a support case open to try to understand this difference. My SE told me that the "NoSplit" parameter (which uses the Type=RolloverSummary attribute in its base search) is the correct metric to measure license usage. My support tech has told me that this is false, and that the "SplitByIndex" metric (using type=Usage) is the true count. Based on my manual measurements of the logs on the syslog server, I have to agree with the SE, but I do not have any way to prove my LM is reporting incorrectly.
3) I have looked for duplicates using a variety of searches. Most show a couple of events, but we are talking fewer than 10, and it is typically isolated to log sources with verbose or debugging output like DNS.
4) I have looked for misconfigured inputs.conf or outputs.conf files. This did yield some results. I found one SH that had multiple outputs.conf files that were cloning some of the data inputs originating from that SH (a WIN!!), but in regard to the syslog wifi and firewall sourcetypes, this doesn't seem to be the case.
5) I have reviewed the character encoding described here: https://docs.splunk.com/Documentation/Splunk/latest/Data/Configurecharactersetencoding. Every props.conf that I can find is set to CHARSET=UTF-8, which as I understand it means Splunk is encoding all ingested logs using UTF-8. I thought this might be the culprit, as higher-order characters can take up multiple bytes in UTF-8. I do not think this is the case, as the syslog logs for wifi only use the lower ASCII characters of UTF-8 (0-127), which I believe take up 1 byte per character. I am making this claim based on https://www.rfc-editor.org/rfc/rfc5424 and from visually looking at the logs; there is not much to them. This also aligns with my manual measurement of the characters in the shell and using len(_raw). Am I missing something here (or not understanding the underlying process)? Are there other root causes I am overlooking? Does my cluster have some sort of issue, maybe with the configuration or architecture? Do others see this same unexplained extra volume, and is it normal behavior? Anything helps, thanks.
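For anyone comparing the two measurements side by side, a sketch of a combined search (the field and index names are taken from the examples above; the percentage column is only illustrative):

    index=_internal source=*license_usage.log type=Usage st=wifi
    | stats sum(b) AS licensed_bytes BY st
    | rename st AS sourcetype
    | append
        [ search index=network sourcetype=wifi
        | eval eventSize=len(_raw)
        | stats sum(eventSize) AS raw_bytes BY sourcetype ]
    | stats values(licensed_bytes) AS licensed_bytes values(raw_bytes) AS raw_bytes BY sourcetype
    | eval pct_diff=round((licensed_bytes - raw_bytes) / raw_bytes * 100, 1)

This puts the license-metered bytes and the raw character count on one row per sourcetype, which makes it easier to track the variance over time.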
Hello. We are trying to upgrade our Splunk UFs from 8.1.9 to 9.0.1. We use a configuration management tool (Puppet) to upgrade our UFs. This has worked fine until the leap to 9.0.1. We are experiencing a timeout during a KV store migration. This appears to be a known issue, and I employed the solution found here. The solution (or workaround) appears simple: edit server.conf and add this:

[kvstore]
disabled = true

And that works. However, I'm concerned about the consequences of this fix. I'm not familiar with the KV store or how it works. Is there any harm in using this workaround? Thank you in advance.
Hi, I have developed an app with my AppBuilder and am looking to go through the validation process. However, I am getting an error when logging in. I have changed the password several times and am still having the issue. The error response from login is below. I am also unable to get to the support case portal; I've got my account admin looking at why I can't get in. Trying to force my way to support redirects me back to an error page. Is someone around here able to point me in the right direction or turn a knob somewhere?

https://www.splunk.com/404?ErrorCode=18&ErrorDescription=Invalid+account

HTTP/2 400 Bad Request
Date: Fri, 28 Oct 2022 16:55:31 GMT
Content-Type: application/json
Content-Length: 234
Server: nginx/1.20.0
X-Amzn-Requestid: 4d6957db-ac76-4df7-b0fa-bc3c480806ef
X-Amz-Apigw-Id: auZsdE5BPHcFz1Q=
X-Amzn-Trace-Id: Root=1-635c0982-2cd11e0872d8f47027637090

{"errors":"oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"The credentials provided were invalid.\"}","msg":"Failed to authenticate user","status":"error","status_code":400}
I have a unique query that I think I have a general logical approach to solving, but the syntax and most efficient route are TBD. The case to solve is this: users are assigned positions in an application, and each position is unique. Positions are assigned security groups that are mapped to roles. We are versioning this mapping into Splunk for two reasons: 1) to be able to rewind and show who was in what groups, so that we can do what-if scenarios 9 months back without trying to figure out what has changed, and 2) to analyze overlap between positions and roles to help simplify where necessary. The latter is the basis of my question.

I have a table created off a makemv/mvexpand that creates a cube of data with Position, GroupName. There are say 99 unique positions and 70 unique security groups; expanded, I have just north of 1200 permutations of them:

Position1, SecGroup1
Position1, SecGroup2
Position2, SecGroup2
Position2, SecGroup5
Position3, SecGroup1
Position4, SecGroup2
Etc.

What I need to do is create stats on the overlap where positions are in similar groups. I know, for instance, that in my current data set ALL positions are in SecGroup1 and 68/99 are in SecGroup2. This is easily calculated for one group, but how do I extend this out at scale for all groups? I am thinking of creating a deduplicated list of security groups, building a full list of all combinations of (SecGroup1 AND SecGroup2) OR (SecGroup1 AND SecGroup3) and so on until that goes in reverse, deduplicating that list, and using it as a subsearch against my raw data, then running stats on it, which in theory should show where two PDs overlap because of the same two groups. Is there a more succinct way of doing this? Can one create this list with | foreach to a foreach? How in Splunk can one calculate a list of permutations and force an AND between them as part of a subquery?
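One more succinct pattern (a sketch, assuming the expanded data already carries the fields Position and GroupName) is to build the group pairs per position with a multivalue self-copy, then count distinct positions per pair instead of enumerating combinations in a subsearch:

    <your base search producing Position, GroupName>
    | stats values(GroupName) AS g1 BY Position
    | eval g2=g1
    | mvexpand g1
    | mvexpand g2
    | where g1 < g2
    | stats dc(Position) AS positions_sharing_both BY g1, g2
    | sort - positions_sharing_both

The where g1 < g2 keeps each unordered pair exactly once, and the final stats shows, for every pair of security groups, how many positions hold both; at roughly 1200 Position/GroupName rows the double mvexpand stays small.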
I have an index that snapshots an inventory system every day. The inventory is a list of all active circuits. There is a timestamp and date of when the snapshot was taken, plus other details. Only active circuits are included. I also have a lookup file of attempts to install a new piece of equipment. This has the date and time of when we tried to install, which circuit we tried to install on, and whether it was successful or not. I'm trying to join the lookup to the index where the date in the index is the day prior to the date of the installation. I only want 1 day prior, not closest-date matching. Time is not important, only the date. Here's my search so far:

index="myindex" sourcetype="mysource"
| fields identifier_1 identifier_2
| eval active_date=strftime(load_date, "%m/%d/%Y")
| join type=inner "identifier_1"
    [| inputlookup mylookup.csv
     | rename ID_1 as identifier_1
     | eval fail_date=strftime(EVENT_TS, "%m/%d/%Y")
     | where active_date=fail_date-1]

I'm sure this is possible, but I'm getting errors. Any help or suggestions would be appreciated.
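One way to avoid the join entirely (a sketch; it assumes load_date and EVENT_TS are epoch timestamps and that mylookup.csv is available to the lookup command) is to pull the install timestamp onto each inventory row and compare day-snapped times, since "%m/%d/%Y" strings cannot be decremented with -1:

    index="myindex" sourcetype="mysource"
    | eval active_day=relative_time(load_date, "@d")
    | lookup mylookup.csv ID_1 AS identifier_1 OUTPUT EVENT_TS
    | eval install_prior_day=relative_time(EVENT_TS, "-1d@d")
    | where active_day=install_prior_day
    | eval active_date=strftime(active_day, "%m/%d/%Y")

relative_time(..., "@d") snaps both values to midnight, so the where clause matches exactly one day prior regardless of the time of day on either side.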
Hello Everyone. I'm trying to find a way to use the eval command to determine whether or not a field in my stats table has more than one value. Here is the scenario: I have two columns - IP Address in column A and userID in column B. The userID field may have more than one value. I'd like to evaluate the userID field in each line to determine if there is more than one userID listed. If there is, I'd like the eval command to place a value of 1 in column C, and if there is only one userID, to place a value of 0 in column C. Any help would be much appreciated. Thanks!
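A sketch of one way to do this with mvcount (assuming the userID column is a multivalue field built by stats values(), and using ip and userID as placeholder field names):

    ... | stats values(userID) AS userID BY ip
    | eval multiple_users=if(mvcount(userID) > 1, 1, 0)

mvcount returns the number of values in the multivalue field, so rows with two or more userIDs get 1 in the new column and single-user rows get 0.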
Hi all, I am new to Splunk and am trying to look for logs that indicate that the splunkd service shut down. I am trying this, but I am not sure if there's a better one:

index=_internal sourcetype="splunkd" keywords "*shut"
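For comparison, a sketch of a more targeted variant; the component name is an assumption about what splunkd logs at shutdown, with the literal phrase as the safer fallback term:

    index=_internal sourcetype=splunkd (component=ShutdownHandler OR "Shutting down splunkd")
    | table _time host component log_level _raw

Keep in mind that a hard crash or kill -9 leaves no shutdown message at all, so a gap in _internal data from a host can also indicate an unclean stop.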
I'm seeing an authentication failure for the SavedSearchFetcher in all of my SHC members' logs, repeating every 30 seconds, as follows:

10-28-2022 12:52:41.505 +0000 ERROR UserManagerPro [63886 SavedSearchFetcher] - Did not find any info for user=<user redacted>
10-28-2022 12:52:41.726 +0000 INFO AuthenticationProviderSAML [63886 SavedSearchFetcher] - Calling authentication extension function=getUserInfo() for user=<user redacted>
10-28-2022 12:52:42.426 +0000 ERROR AuthenticationProviderSAML [63886 SavedSearchFetcher] - Authentication extension function=getUserInfo() returned status=failed for user=<user redacted>
10-28-2022 12:52:42.426 +0000 ERROR AuthenticationProviderSAML [63886 SavedSearchFetcher] - Error message from function=getUserInfo() : Unable to get user info for username=<user redacted>. This script only officially supports querying usernames by the User Principal Name, Object ID, or Email properties. To use other user properties, use the 'azureUserFilter' argument and search the Microsoft documentation for a full list of properties: "user resource type - Microsoft Graph v1.0" / "Properties"

The <user redacted> does not exist in our SHC, nor is there such a user in our SSO system that is supplying the SAML response to our authentication extension. We have 100s of users and 100s of saved searches, alerts, and reports running, and this is the only occurrence of this situation. So I have two questions that I cannot answer from my investigation of the logs:

1. How can I find the source of these SavedSearchFetcher calls to the authentication extension? Before you say look in your saved searches, remember the <user redacted> does not exist on the SHC nor in our SSO IdP. I have looked in all SHC members' ~/etc/user directories for a user that matches <user redacted>, but that doesn't exist either. Bottom line, there are no saved searches for <user redacted>. I've also searched all the savedsearches.conf files for the <user redacted> string (e.g. a deprecated userid setting) and there are none of those either. I've looked at the splunkd.log file before and after these logs (INFO level logging) and there is no help, which makes some sense because this is an authorization failure, so nothing more should be happening.

2. What other ways trigger the SavedSearchFetcher other than a REST call or a scheduled search? I have correlated the logs for REST calls to these logs and there is nothing matching the frequency. And as stated above, I cannot find a scheduled search that matches either. So where else could this be coming from?

Thanks in advance for your help with this.
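For anyone hitting the same thing, a sketch of a REST-based sweep for saved searches owned by (or authored by) an unexpected user across all apps and users; run it on a SHC member and substitute the redacted username:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search eai:acl.owner="<user redacted>" OR author="<user redacted>"
    | table title eai:acl.app eai:acl.owner author disabled

The wildcarded namespace (/-/-/) covers private objects in every user context, which a filesystem grep of etc/users can miss if the object lives in an app-level local directory.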
Hello again, community. Today I received notice that every Friday morning at a particular time there are a lot of new sessions registered in the firewall log, apparently caused somehow by Splunk. The question was passed down: why? So I played around with the metrics log, input/output, etc., though I cannot see any correlated increase or decrease in the numbers observed around the same time. What I ended up with were variations of:

index=_internal source=*metrics.log group=tcp<in|out>_connections | timechart count by host useother=false

My question: is this a reasonable approach? Otherwise, what would be a better search to get the number of newly established connections between members of the Splunk infrastructure, to figure out whether any components are establishing a higher number of new connections? All the best
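A sketch of one variation that focuses on how many distinct sources are connecting to each receiver, rather than the raw count of metrics events (it assumes the sourceIp field is present in the tcpin_connections metrics events; adjust the field name if your version logs it differently):

    index=_internal source=*metrics.log* group=tcpin_connections
    | timechart span=5m dc(sourceIp) AS connecting_sources BY host

A spike in connecting_sources for a particular receiving host at the Friday-morning time would point at the component opening the burst of new sessions.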
Team, we are trying to use the Splunk DB Connect app's "Outputs" option to insert data into a MySQL table. For this we are using the DBX output alert action in a Splunk alert and calling the "Output" from the Splunk DB Connect app. We followed the steps below:
1) Created a new output named "insertdb_cronjob" under the "Outputs" option in the Splunk DB Connect app.
2) Created an alert.
3) Added the trigger action as the DBX output alert action, where it calls "insertdb_cronjob" to insert data into the MySQL table.
4) Manual execution works fine, but when the alert is triggered we get the below error in the internal logs:

10-28-2022 12:00:01.960 +0530 INFO  sendmodalert [3425498 AlertNotifierWorker-0] - Invoking modular alert action=alert_output for search="si_alert" sid="rt_scheduler__admin_c3BsdW5rX2FwcF9kYl9jb25uZWN0__RMD5f6c0cb1e9f4fe73f_at_1666933094_23674.0" in app="splunk_app_db_connect" owner="admin" type="saved"
10-28-2022 12:00:02.065 +0530 ERROR sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output STDERR -  Error: Could not find or load main class com.splunk.dbx.command.DbxAlertOutput
10-28-2022 12:00:02.068 +0530 INFO  sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output - Alert action script completed in duration=105 ms with exit code=1
10-28-2022 12:00:02.068 +0530 WARN  sendmodalert [3425498 AlertNotifierWorker-0] - action=alert_output - Alert action script returned error code=1

Can you please suggest if we are missing anything? Thanks, Dinesh
Hello, I have a corrupted warm bucket. What I am trying to do is find out the time interval of the events stored in this bucket. I found the file bucket_info.csv, where I have _indextime_et, which I assume is the index-time earliest, i.e. the time the first event of the bucket was indexed, right? How can I find the time range of events in a bucket? In other words, is there a way to find the first event indexed in a bucket and the last one? Any help will be appreciated. Thank you.
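If the bucket is still known to the indexer, a sketch using dbinspect can show the event time range per bucket (replace <your_index>; startEpoch and endEpoch are the earliest and latest event times recorded for each bucket). The warm bucket directory name itself also encodes the range as db_<latest_event_epoch>_<earliest_event_epoch>_<id>:

    | dbinspect index=<your_index>
    | eval earliest_event=strftime(startEpoch, "%F %T"), latest_event=strftime(endEpoch, "%F %T")
    | table bucketId path state earliest_event latest_event

Matching the path column against the corrupted bucket's directory gives its event time range even when the bucket's own contents cannot be read cleanly.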
In my SPL I use the associate command. However, I've noticed that when I use it, the preliminary search results from before the associate command are no longer available after it. Why is that, and how can I preserve the earlier search results for use after the associate command?
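One possible workaround (a sketch; associate is a transforming command, so it normally replaces the event list with its correlation table) is to run it inside appendpipe so the original results are kept and the associate output is appended after them:

    <your base search>
    | appendpipe [ associate supcnt=3 ]

The rows produced by associate then appear after the original events, so both sets remain available to later commands in the pipeline.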
We want to use the Splunk Add-on for Microsoft Cloud Services for the ingestion of data from the Azure Active Directory. For this we send data from our LogAnalyticsWorkspace (and an EventHub) to the Splunk TA (which is working). Unfortunately, the documentation is not very precise about the source types to use. Which source types can I use for the following data?

Sign-ins (azure:monitor:aad ?)
Audit data (azure:monitor:aad ?)
AAD Risky Users
User Risk Events

Thanks.