All Topics

I have a search that generates results like those below. I need a search where, if TAC, CellName and Date are the same in two rows, it removes the rows where SiteName and Address are "NULL", and if TAC, CellName and Date are different across rows, the rows with a "NULL" value for SiteName and Address remain.
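A minimal SPL sketch of the filter described here, assuming SiteName and Address contain the literal string "NULL" and that the existing base search already produces the table:
<base search>
| eventstats count AS rows_in_group BY TAC CellName Date
| where NOT (rows_in_group > 1 AND SiteName="NULL" AND Address="NULL")
| fields - rows_in_group
The eventstats counts how many rows share the same TAC, CellName and Date; the where clause then drops the "NULL" rows only when a duplicate row exists for that combination.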
Splunk Web is working fine. We need to add our SonicWall firewall syslogs to Splunk as a data input. Please guide us on configuring the data inputs so the data is indexed automatically. Is it possible to configure SonicWall without a UF?
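A minimal sketch of a network input on the Splunk server itself (no UF required), assuming the SonicWall sends syslog to UDP 514 and an index named sonicwall already exists - both are assumptions to adjust:
[udp://514]
sourcetype = sonicwall
index = sonicwall
connection_host = ip
The same input can also be created in Splunk Web under Settings > Data inputs > UDP.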
Can someone please explain the steps to create a ticket in ServiceNow from a Splunk alert? I did find these links: "Use alert-triggered scripts for the Splunk Add-on for ServiceNow" and "Use custom alert actions for the Splunk Add-on for ServiceNow". But before I dig deep into the above, I just want to know if there is anyone in this group who is already doing this? If yes, what's the best way to get this done? Thank you.
Hi, can someone suggest a method to ensure my scheduled report runs without being skipped? Cron = 8,18,28,38,48,58 * * * * with a schedule window of 15 minutes. I use a custom timeframe larger than required to cater for when the report is skipped. Generally the report runs 2 times an hour, sometimes 3, but at times it does not run for a full hour. When I run the report ad hoc, it takes less than a minute.
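A quick way to confirm the skips and see the reason the scheduler gives, assuming the placeholder below is replaced with the real saved search name:
index=_internal sourcetype=scheduler status=skipped savedsearch_name="<your report name>"
| stats count by reason
The reason field (for example a concurrency limit) usually points at why the scheduler dropped the run, which narrows down whether the fix is scheduling, priority or search-head capacity.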
Shouldn't the "Default value" for this 'Add-on Setup Parameter' get saved in the respective conf file's default file? Or do anything, for that matter? I see it the 'Display initial text' renders when... See more...
Shouldn't the "Default value" for this 'Add-on Setup Parameter' get saved in the respective conf file's default file? Or do anything, for that matter? I see it the 'Display initial text' renders when the Configuration UI is loaded but since no value for this is set in the 'default' file there effectively is no value set anywhere - resulting in a broken add-on since it's not "fully configured". Said another way: The default values for Add-on Setup Parameters don't seem to get saved into the respective default conf file created (default/<ta_name>.conf). Conversely, any default values for the Data Input properties do get saved in the proper place in default/inputs.conf . I see some of this info is saved in the <ta_name>_rh_settings but that seems to only handle the setup pages. The result of this missing default config is that when I try to save a new instantiation of the input it won't work because it's missing those critical Add-on Setup Parameters. I'm not AoB expert so maybe I'm doing something wrong here? Cross post: https://splunk-usergroups.slack.com/archives/C04DC8JJ6/p1659404655720859
Hi, I have many logs like this:

{"line":{"timestamp":"2022-07-27T20:35:32.756Z","level":"DEBUG","thread":"http-nio-8080-exec-4","mdc":{"clientId":"9AuZjs2vQMCfAYpSB","requestId":"62d-b003-3aff82daddc9","requestUrl":"http://example.com","requestMethod":"POST","apigeeRequestIdHeader":"rrt-0e9fc19850 378837932","requestUri":"/v1/exchanges","userId":"ZWJ5FWLNM"},"logger":"com.eServiceImpl","message":"ChangeSet is not Valid. Error count is : 4. Aggregate error message is : Property of type 'source.acc-1.0.0' is missing required property 'schemaNamespace'.\nProperty of type 'source.acc-1.0.0' is missing required property 'sourceId'.\nProperty of type 'host.acc-1.0.0' is missing required property 'fileUrn'.\nProperty of type 'host.acc-1.0.0' is missing required property 'versionUrn'..Total time for validation is : 0ms"},"source":"stdout","tag":"cd76691","attrs":{"cloudos.portfolio.version":"0.1.2001","com.amazonaask-arn":"arn:aus-west-716:task/COSV2-C-UW2/5e563a4","docker.image":"artifactory.devcloud.net/oud/001","obs.mnkr":"fdxs-abcd22"}}

Successful validations are identified by the string "ChangeSet is Valid" in the line.message field and failed validations are identified by the string "ChangeSet is not Valid" in line.message, as shown above. Now I want a query that gives the percentage of failed and passed events by the line.mdc.clientId field. Please help!

Desired output:
ClientId | failed % | failed events (number) | pass % | passed events (number)
A | X% | a | Y% | b

Basic search: line.message="ChangeSet is*". This is the base search on which I want the grouping of results (by clientId, as in the example above) to happen.
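A minimal SPL sketch of that grouping, assuming the JSON is extracted automatically so line.message and line.mdc.clientId exist as fields:
<base search> line.message="ChangeSet is*"
| eval result=if(like('line.message', "%ChangeSet is not Valid%"), "failed", "passed")
| stats count AS total, count(eval(result="failed")) AS failed_events, count(eval(result="passed")) AS passed_events BY line.mdc.clientId
| eval failed_pct=round(100*failed_events/total, 2), passed_pct=round(100*passed_events/total, 2)
| table line.mdc.clientId failed_pct failed_events passed_pct passed_events
The single quotes around 'line.message' in eval are needed because the field name contains dots.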
eStreamer is sending about 12 logs per minute and each log is about 30 MB. This is causing an issue with license consumption - we get a license violation every day. What setting can I change to reduce the number of logs and the size of the logs? Thank you.
I want to compare a lookup with a search.

Lookup "list_host_lookup.csv":
Server
AA
BB
CC
DD
EE
FF
GG

Search:
index=abcddf sourcetype | dedup HOST | table HOST STATUS

HOST | STATUS
AA | Active
BB | Active
CC | Off
DD | Active
GG | Off
HH | Active
II | Off

If a host from the lookup (list_host_lookup.csv) is not in the search results, or is in the results with status "Off", set a field to "NOK". If the host from the lookup is in the search results with status "Active", set the field to "OK".
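One way to sketch this, assuming the lookup column is Server, the search field is HOST, and the sourcetype below is a placeholder:
| inputlookup list_host_lookup.csv
| rename Server AS HOST
| join type=left HOST
    [ search index=abcddf sourcetype=<your sourcetype> | dedup HOST | table HOST STATUS ]
| eval result=if(isnull(STATUS) OR STATUS="Off", "NOK", "OK")
Hosts missing from the search have no STATUS after the left join, so isnull(STATUS) covers the "not in the search" case.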
Hi, I want the alert to trigger if there are extracts where TOTAL_PIECES > 0 and RETRIEVAL_ATTEMPT = 10. Can anybody help with this, please? My search is: index=A source=B sourcetype=c | fillnull value=0 TOTAL_PIECES RETRIEVAL_ATTEMPT | where RETRIEVAL_ATTEMPT= 10 | rename "SASP_CTRL_SEQ_NBR" as "Extract_Seq_ID", "IV_STS" as "IV_Status", "RETRIEVAL_ATTEMPT" as "Retrieval_Attempt", "PSTG_STMT_N" as "Pos_St", "TOTAL_PIECES" as "Piece_Count" | table "Extract_Seq_ID","IV_Status","Retrieval_Attempt","Pos_St","Piece_Count"
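A small sketch of the adjusted filter, assuming both conditions must hold on the same event, so the existing where clause just gains the second condition:
index=A source=B sourcetype=c
| fillnull value=0 TOTAL_PIECES RETRIEVAL_ATTEMPT
| where TOTAL_PIECES > 0 AND RETRIEVAL_ATTEMPT = 10
| rename "SASP_CTRL_SEQ_NBR" as "Extract_Seq_ID", "IV_STS" as "IV_Status", "RETRIEVAL_ATTEMPT" as "Retrieval_Attempt", "PSTG_STMT_N" as "Pos_St", "TOTAL_PIECES" as "Piece_Count"
| table "Extract_Seq_ID","IV_Status","Retrieval_Attempt","Pos_St","Piece_Count"
The alert would then be set to trigger when the number of results is greater than 0.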
Hello, I have a SonicWall TZ600 with both syslog on 514 and log automation over to an FTP folder on the Splunk server. I do see data, but I am not sure any of it is relevant. Are there any good, recent guides for setting up a SonicWall with Splunk so I can see interface usage and other key metrics? I'm new to Splunk and am trying to focus on learning through the setup of this device. Thanks.
Hello Splunk Community,

History of the problem: I recently was trying to update OSSEC agents, and some needed to be reinstalled to be fixed. In my plan of action for targeting, the main OSSEC server got targeted and reinstalled as an agent instead, which meant reconfiguring everything because we did not have a backup. After a weekend of configuration recovery, all 100+ agents got reconnected and authenticated with new keys and updated configs for both the server and agents. Everything is connected and communicating, reporting alerts to alerts.log, and the local_rules.xml file is updated. We use the Splunk forwarder to forward the logs and we get log information, but it looks like it's not being processed correctly - this is our issue.

Problem and end result needed: Splunk is receiving information from the OSSEC server, but the data is having trouble being processed by the IDS data model. The Splunk "sourcetype" field for the logs seems to be the root of the issue. We need the data models to process this information, or the Splunk OSSEC add-on to be properly configured; the add-on is present on the Splunk server at /opt/splunk/etc/apps/Splunk_TA_ossec/ but is not fully configured.

What we noticed in Splunk search:
Before: sourcetype=alerts
Current (needing a fix): sourcetype=alerts-4 and sourcetype=alerts-5
The IDS (Intrusion Detection) Splunk data model needs to process the logs by severity and update the dashboard accordingly, and the Splunk add-on is not properly configured. Some pictures of the issue are attached.

Resources:
https://docs.splunk.com/Documentation/AddOns/released/OSSEC/Setup
https://docs.splunk.com/Documentation/CIM/5.0.1/User/IntrusionDetection
https://docs.splunk.com/Documentation/AddOns/released/OSSEC/Sourcetypes
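Suffixes like alerts-4 and alerts-5 typically appear when Splunk auto-learns a sourcetype instead of being given one explicitly. A minimal sketch of pinning the sourcetype in inputs.conf on the forwarder - the monitor path, index name and exact sourcetype value are assumptions; check the add-on's Sourcetypes documentation for the value Splunk_TA_ossec actually expects:
[monitor:///var/ossec/logs/alerts/alerts.log]
sourcetype = alerts
index = ossec
disabled = false
After restarting the forwarder, new events keep the fixed sourcetype, which the add-on (and through it the IDS data model) can then map via its field extractions.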
I have an event that came in at the same time but has multiple data values that I need to separate into rows.

Example:
_time | example A
2022-09-02 | dgde746gdhu4 duyheuye4d0

I need this:
_time | example A
2022-09-02 | dgde746gdhu4
2022-09-02 | duyheuye4d0
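A minimal sketch, assuming "example A" is a single multivalue field (renamed here to example_A because mvexpand takes one field name):
<base search>
| rename "example A" AS example_A
| mvexpand example_A
mvexpand writes one row per value, each keeping the original _time. If the two values are instead one space-delimited string, running makemv delim=" " example_A before the mvexpand turns it into a multivalue field first.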
Gurus, I am working on a Dashboard Studio dashboard and I would like to add the output of a transaction the way it is usually shown in the search GUI, for debugging purposes, so I can easily see if the transaction is correct. It turns out the only option I seem to have is a table, but there I only get the raw message. That's ugly and unreadable, of course, since the newlines are merged into one. Is there a way to do this within a dashboard and make the message look just like in the search GUI? Perhaps I could re-insert the newlines? Thanks
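One workaround sketch, assuming the goal is just to see the merged events on separate lines in a table: split _raw into a multivalue field of its individual lines, which the table visualization renders one value per line (the transaction fields are placeholders):
<your base search>
| transaction <your fields>
| rex field=_raw max_match=0 "(?<raw_lines>[^\r\n]+)"
| table _time duration eventcount raw_lines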
I'm curious what the best way is to test if a directory exists on a server (Windows/*NIX) and, if it exists, have the deployment server push the appropriate app out to the given server to pick up the logs. I've been told that it's best not to just push out all apps to all servers, so I'm trying to be more selective. At the moment we run a script (bash, PowerShell) on the local server with Splunk and then create custom inputs.conf files to have them send the logs we need. However, this prevents the deployment server from managing those apps. I'm curious if there's a better way to do this, so we can manage the apps through the deployment server and don't have these one-off scenarios that we have to document so others know about them.
Not sure if I am missing something, but the correlation searches provided by ESCU are not consistent in their results. Some identify the user in a field user_id, some in a field UserID. This is inconsistent (which I could live with), but it does not match up to the fields used (by default) to identify users within Enterprise Security - Incident Review, so I need to add them to the "Incident Review - Event Attributes". In addition, if I am using data enrichment, then I also need to add fields like UserID_email, UserID_bunit, UserID_category, etc. to "Incident Review - Event Attributes". If the ESCU correlation searches could return a more "standard" set of fields as results, it would make things work more "out of the box". I appreciate that I might have missed something obvious, and I hope I have - I value all replies.
Below is the sample input for my search:

BusinessIdentifier : 09 ***** MessageIdentifier : 3308b7dd-826c-4e98-8511-6a018c5f8bcc ***** TimeStamp : 2022-03-16T11:08:30.013Z ***** ElapsedTime : 0.25 ***** InterfaceName : NLTOnline ***** ServiceLayerName : OSB ***** ServiceLayerOperation : CreateQPBillingEvents ***** ServiceLayerPipeline : requestPipeline ***** SiteID : ***** DomainName : ***** ServerName : DEVserver ***** FusionErrorCode : ***** FusionErrorMessage : ***** <Body xmlns="http://schemas.xmlsoap.org/soap/envelope/"><com:createQPBillEvents xmlns:com="com.alcatel.lucent.on.ws.manager"> <com:ACTION_DATE>2021-08-30T23:59:59+08:00</com:ACTION_DATE> <com:ADR_BLDG_TYPE>HDB</com:ADR_BLDG_TYPE>

I need to extract the values of the fields below:
ElapsedTime : 0.25
InterfaceName : NLTOnline
ServiceLayerName : OSB
ServiceLayerOperation : CreateQPBillingEvents
ServiceLayerPipeline : requestPipeline

Using xmlkv, it's not working. Can someone help provide the right command?
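These pairs are plain "key : value" text separated by asterisks rather than XML, so xmlkv has nothing to parse. A rex-based sketch (field names taken from the sample; the value pattern assumes the values themselves contain no asterisks):
<base search>
| rex "ElapsedTime : (?<ElapsedTime>[^\*]+) \*"
| rex "InterfaceName : (?<InterfaceName>[^\*]+) \*"
| rex "ServiceLayerName : (?<ServiceLayerName>[^\*]+) \*"
| rex "ServiceLayerOperation : (?<ServiceLayerOperation>[^\*]+) \*"
| rex "ServiceLayerPipeline : (?<ServiceLayerPipeline>[^\*]+) \*"
| table ElapsedTime InterfaceName ServiceLayerName ServiceLayerOperation ServiceLayerPipeline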
Dear Splunkers, I want to add a drilldown link to my dashboard that redirects to a remote website. Currently, I do it with the following URL using the <link> tag inside the drilldown: <link>http[:]//website.com/param1=xyz</link> The problem is that when the user clicks on the link, param1=xyz is part of the URL and is visible in the browser. Does drilldown support HTTP POST so that I can hide param1=xyz from being displayed in the browser? Regards.
Hi, can anyone think of a way to get Splunk versions reported from universal forwarders when in an intermediate forwarder environment? I have tried searches like index=_internal sourcetype=splunkd group=tcpin_connections, but it only returns the agent version of the intermediate layer, not the UF versions behind it. Are there any commands that can be deployed to each UF to collect that information?
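One thing worth checking, as a sketch: tcpin_connections metrics are logged by whichever instance receives the connection, so the UF versions show up in the intermediate forwarders' own _internal data. If the intermediates forward their _internal index on to the indexers, a search like this should list the UFs behind them (fwdType, hostname and version are fields in those events; filtering on fwdType=uf is an assumption about how the agents identify themselves):
index=_internal sourcetype=splunkd group=tcpin_connections fwdType=uf
| stats latest(version) AS splunk_version BY hostname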
I have scheduled a Splunk report and set the search time range to Previous week. The report I am getting covers Sunday to Saturday, but I want the search to cover Monday to Sunday of the previous week. Please help here.
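A minimal sketch using relative time modifiers instead of the Previous week preset - @w1 snaps to Monday, so this covers the previous Monday 00:00 through the end of Sunday:
<your report search> earliest=-1w@w1 latest=@w1
The same earliest/latest values can be set as a custom Advanced time range on the report.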
I have a search that counts the vulnerabilities for a given team and places them on a bar chart on a dashboard, based on the "Risk" field, to display how many Critical, High, Medium or Low events there are. The problem I have is that not all teams have all 4 levels of vulnerabilities, so the graphs look a bit rubbish. Some only have one level, others have 3 or 4, and the graphs only show the vulnerabilities that have a value. I would like to always have Critical, High, Medium AND Low on the x-axis for every team, even though the value for some of these may be zero. For example, if a team has 5 Mediums, the graph only shows one bar. How do I create a bar chart that shows: Critical=0, High=0, Medium=5, Low=0? Thanks
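One way to sketch this, assuming the existing search ends in a count by Risk: append a zero-count row for every level so missing ones still chart:
<base search>
| stats count BY Risk
| append
    [| makeresults
     | eval Risk=split("Critical,High,Medium,Low", ","), count=0
     | mvexpand Risk
     | fields Risk count]
| stats sum(count) AS count BY Risk
The appended rows contribute 0 to levels that already exist and create the missing ones, so every team's chart shows all four bars.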