Is it possible to configure heavy forwarders to send data to two tcpout groups (A and B) in outputs.conf without blocking on a group B failure? We want to send all data to group A, and a subset of data (specific sourcetypes) to group B, but group B is in a remote location, our link to that location is not fully stable, and we don't want to lose events in group A on link failures or group B failures.

[tcpout]

[tcpout:groupA]
server = indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server = indexerB_ip:9997
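One approach worth testing (a sketch; verify the exact setting names and defaults against the outputs.conf spec for your Splunk version) is to let group B drop events when its output queue fills, instead of blocking the whole forwarding pipeline:

```
# outputs.conf (sketch, placeholder IPs)
[tcpout]
defaultGroup = groupA

[tcpout:groupA]
server = indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server = indexerB_ip:9997
# Drop events destined for groupB once its queue has been full
# for 30 seconds, rather than blocking all forwarding.
dropEventsOnQueueFull = 30
# Optionally buffer more data to ride out short link outages.
maxQueueSize = 100MB
```

The sourcetype subset would still be routed to groupB via _TCP_ROUTING in props.conf/transforms.conf; only the queue behavior changes here. Group A keeps its default blocking behavior, so its events are not lost when group B is unreachable.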
I have a dbquery output that looks like the below; unfortunately I can't update the actual database query to make it more readable.

2022-12-16 21:30:17.689, TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')="12-16-22 16:29"

I am trying to determine whether the two times at the beginning and end of the results are within 15 minutes of each other. I have tried renaming the column from the long string, but I can't get that working using the rename function. Does anyone have any ideas how to rename it (or whether I even need to), and then evaluate whether the times are within 15 minutes of each other? The query I ran to get the above is just index="abc".
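One way this could be approached (a sketch; the time format string is an assumption based on the sample value "12-16-22 16:29"): quote the awkward field name in rename, parse the embedded timestamp with strptime, and compare it against the event time:

```
index="abc"
| rename "TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')" AS max_time
| eval max_epoch = strptime(max_time, "%m-%d-%y %H:%M")
| eval diff_min  = abs(_time - max_epoch) / 60
| eval within_15 = if(diff_min <= 15, "yes", "no")
```

The key detail is that rename accepts field names containing special characters as long as the name is wrapped in double quotes.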
Are .p12 and .pfx files required to use Splunk after initial install?
Community, I am attempting to retrieve events in Splunk regarding Tenable vulnerability data. The goals are as follows:

- Obtain the most recent information for a given vulnerability ID and device pair.
- Filter out any vulnerabilities that have a severity equal to "informational", and/or
- Filter out any vulnerabilities that have a state of "fixed".

The issue I have encountered is that "fixed" may be the most recent status for a vulnerability. Simply filtering that value out for a specific vulnerability ID and device combination will leave that pair in the result set based on an older record, even though the vulnerability has been fixed, and we don't want IT chasing "fixed" vulnerabilities.

In reality, what I want to see is the most recent vulnerability for a given device, provided its state is not "fixed" and its severity is not "informational" (the reason being that some vulnerability severities change over time; one that started out as "high" may now be "informational", or vice versa); otherwise, the device and vulnerability ID pair should not be listed in my result set at all. Here is how far I have gotten to date:

`get_tenable_index` sourcetype="tenable:io:vuln"
    [ search index="tenable" sourcetype="tenable:io:assets" deleted_at="null"
      | rename uuid AS asset_uuid
      | stats count by asset_uuid
      | fields asset_uuid ]
| rename plugin.id AS Plugin_ID asset_uuid AS Asset_ID
| strcat Asset_ID : Plugin_ID Custom_ID
| stats latest(*) as * by Custom_ID   <<< The problem here: the latest record might be "fixed" or "informational", in which case I want to ignore the pair entirely.
| rename plugin.cvss_base_score AS CVSS plugin.synopsis AS Description plugin.name AS Name plugin.cve{} AS CVE output AS Output severity AS Risk plugin.see_also{} AS See_Also plugin.solution AS Solution state AS State plugin.has_patch AS Patchable plugin.exploit_available AS Exploitable plugin.exploited_by_malware AS Exploited_By_Malware plugin.publication_date AS Plugin_Publish_Date
| table Custom_ID, CVSS, Description, Name, CVE, Plugin_ID, Output, Risk, See_Also, Solution, State, Asset_ID, Patchable, Exploitable, Exploited_By_Malware, Plugin_Publish_Date tags{}.value
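One way to express "exclude the pair entirely when its most recent record is fixed or informational" (a sketch; the field names state and severity are taken from the post and may need adjusting to the actual extracted names): compute latest(*) per pair first, and only then filter on the latest values:

```
... | stats latest(*) as * by Custom_ID
    | where state != "fixed" AND severity != "informational"
```

Because the where clause runs after stats latest, a pair whose most recent record is "fixed" or "informational" drops out of the results completely, rather than falling back to an older, still-open record the way a pre-stats filter would.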
I am using rex to extract fields and then a lookup so I can get only the desired entries, but I am not able to do so.

My access logs:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1 HTTP/1.1" 200 350 85

My search query with lookup:

<query>
| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"
| search serviceName="/google"
| lookup abc.csv uri_path OUTPUT serviceName apiName
| search serviceName=* apiName=*

With the above, every API shows the same count, and I am not able to get stats or logs for a particular one. Is there a way to match the wildcarded uri_path patterns in the CSV against the extracted uri_path and produce results with both uri_path and apiName? Can anyone please help me with this?

The CSV file looks like this (I am trying to match apiName to uri_path so the logs line up properly):

serviceName  uri_path                        http_method  apiName
/google      /page1/page1a/*/googlep1        post         postusingRRR
/google      /page1/page1a/sada/*/googlep1   get          getusingep2
/google      /pag5/ggg/*/ooopp/ggplr         delete       deleteusing
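By default a CSV lookup does exact matching, so rows whose uri_path contains * never match anything. Wildcard matching has to be enabled in a lookup definition in transforms.conf (a sketch; the lookup name api_lookup is a placeholder):

```
# transforms.conf on the search head (sketch)
[api_lookup]
filename   = abc.csv
# Treat the uri_path column as a wildcard pattern instead of a literal.
match_type = WILDCARD(uri_path)
```

The search would then reference the lookup definition by name rather than the file: | lookup api_lookup uri_path OUTPUT apiName. With the wildcard match in place, an extracted uri_path like /page1/page1a/633243463476/googlep1 should match the CSV row /page1/page1a/*/googlep1 and pick up its apiName.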
I am using the following query to get the results:

index=abc node=*
| chart latest(state) as state by node
| stats count by state
| sort - state

Below is the column chart display of it. I want to display each state with a custom color. I tried using the line below in the XML, but it does not change anything:

<option name="charting.fieldColors">{"Allocated":0x333333,"DOWN":0xd93f3c,"IDLE":0xf58f39,"Minor":0xf7bc38,"Notice":0xeeeeee,"Healthy":0x65a637}</option>
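A likely cause (worth verifying on your dashboard): charting.fieldColors colors series by name, but after stats count by state the chart has a single series named count, with the state names on the x-axis, so none of the names in the option ever match a series. One workaround is to turn each state into its own series with transpose, after which a fieldColors option keyed on the state names should apply:

```
index=abc node=*
| chart latest(state) as state by node
| stats count by state
| transpose header_field=state
| fields - column
```

This produces one column per state (Allocated, DOWN, IDLE, ...), each of which is a named series the charting.fieldColors mapping can color.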
I have a question regarding KPI thresholds in Splunk ITSI. When using the cloning action, all KPI thresholds created inherit the Timezone attribute of the user that cloned them. Could anyone give me a working example? Thanks in advance.
How do we relate the query below to the SVC usage of its results?

index=_audit action=search search=* user!=splunk-system-user provenance!=scheduler
| table _time user search host total_run_time result_count
| sort - _time
We're sending logs to Splunk Cloud over port 514 using the following stanza in inputs.conf:

[udp://514]
index = syslog
disabled = false
sourcetype = syslog

This works great; however, we are now sending more than one type of log this way. Can we assign different sourcetypes depending on the origin of the logs? For example: if they are from IP address A, give them the "firewall" sourcetype, and if from IP address B, the "crontab" sourcetype.
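One common pattern (a sketch; the IP addresses and transform names are placeholders) is an index-time sourcetype override keyed on the sending host, configured on the instance that receives the UDP input:

```
# props.conf (sketch)
[source::udp:514]
TRANSFORMS-set_st = set_firewall_st, set_crontab_st

# transforms.conf (sketch)
[set_firewall_st]
SOURCE_KEY = MetaData:Host
REGEX      = ^host::10\.0\.0\.1$
DEST_KEY   = MetaData:Sourcetype
FORMAT     = sourcetype::firewall

[set_crontab_st]
SOURCE_KEY = MetaData:Host
REGEX      = ^host::10\.0\.0\.2$
DEST_KEY   = MetaData:Sourcetype
FORMAT     = sourcetype::crontab
```

For UDP inputs the host metadata is typically the sender's IP, which is what the REGEX matches on. Since these are index-time transforms, they must live on the parsing tier (a heavy forwarder in front of Splunk Cloud, or be deployed via a Cloud-vetted app), not on a universal forwarder.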
Hi, I have the table below; I need to group by a field and eval (add) the values to arrive at the second table below. Help please.
Hello Splunkers, I need your help to get the desired result. Below is a sample query for reference:

| makeresults
| eval week_year="2022-48",group="ABC",old=64,new=78
| append [| makeresults | eval week_year="2022-48",group="XYZ",old=35,new=15]
| append [| makeresults | eval week_year="2022-49",group="XYZ",old=33,new=17]
| append [| makeresults | eval week_year="2022-49",group="ABC",old=215,new=158]
| fields - _time
| eval target1=round((old/new)*0.17,3)*100, target2=round((old/new)*0.26,3)*100, final=round(old/new,3)*100
| table week_year group final target1 target2
| chart last(final) as final values(target1) as target1 values(target2) as target2 over group by week_year

But since values() is used, we get target fields for each week, whereas the expected outcome is a single line each for target1 and target2. Please help me get the visualization in the correct format. Thanks in advance!
I am looking at building a homelab for Splunk. Any suggestions for minimum hardware? I can't really do 8 cores / 64 GiB; 6 cores / 32 GiB would be feasible.
My count field is right-justified and therefore sits far from the description. Is it possible to either left-justify the count values or right-justify the count column as a whole in a table?
Hi, I need to know about the Server Visibility license in AppDynamics.
Hello, I get a 404 error when I move to the Splunk support portal from splunk.com. How do I fix that?
Hi, my company (Länsförsäkringar AB) in Sweden uses Splunk, and my team uses the Universal Forwarder agent. Is there a way to subscribe to an info mail that is sent as soon as a new version of the Universal Forwarder agent (Windows version) is released? Regards, Slobodan Mitrasinovic, +46 768 592717
Why do I get the following messages in splunkd.log after installing Splunk Universal Forwarder on a GCP instance?

12-16-2022 10:49:12.021 +0000 WARN AwsSDK [1903 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
12-16-2022 10:49:12.021 +0000 WARN AwsSDK [1903 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
12-16-2022 10:49:12.023 +0000 ERROR AwsSDK [1903 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed with error code 404
12-16-2022 10:49:12.023 +0000 ERROR AwsSDK [1903 ExecProcessor] - EC2MetadataClient Can not retrive resource from http://169.254.169.254/latest/meta-data/placement/availability-zone
Hi all, I'm trying to set up an email alert notification in Splunk. In the alert description, I want to mention only particular field values that the search returns. I thought of using $result.fieldname$, but as Splunk documents, that token only returns the field's value from the first result row.

For example:

Field name: numbers
Values: 1,2,3,4,5

Search: index="" | table numbers
Alert description: The number values are: $result.numbers$
Output: The number values are: 1
Expected output: The number values are: 1,2,3,4,5
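Since $result.fieldname$ reads only the first result row, one workaround (a sketch) is to collapse all the values into a single row before the alert fires, so the first row contains everything:

```
index="" ...
| stats values(numbers) as numbers
| eval numbers = mvjoin(numbers, ",")
```

stats values() gathers every distinct value into one multivalue field on a single row, and mvjoin turns it into a comma-separated string, so $result.numbers$ should then expand to the full list (e.g. 1,2,3,4,5).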
A new disk was attached to a Windows Server 2012 machine. I restarted the machine agent, but the new disk still does not show up in AppDynamics under Disks. Please assist.
Hello, I'm experiencing an issue with the Splunk TA for O365, in particular with the SharePoint Management Activity logs. The issue is this:

1) 10:00 AM: I activate the input.
2) 10:01 AM: Splunk starts to collect the 10:00 AM events.
3) 10:05 AM: Splunk continues to collect SharePoint logs, but going backwards in time (9:59 AM, 9:58 AM, and so on).
4) 11:00 AM: Splunk is still collecting logs from the past, but the temporary token expires and the input is closed and reopened.
5) 11:00 AM: Splunk reopens the input.
6) 11:01 AM: Splunk starts to collect the 11:00 AM events.
7) Jump back to step 3, but one hour later.

Do you know how to stop Splunk from going backwards and have it collect forward in time instead?

Regards, Marco