Hello everyone, I was just curious whether there are any surefire best practices for health rules pertaining to business transactions. Since I started using AppD, I have been using 2 standard deviations as a warning and 3 standard deviations as critical. Of course, every app is different and there are nuances. I was just wondering if anyone has a good rule of thumb they like to use, or if current best practices are documented anywhere. Thanks
I am looking for an app that can help create a full picture of the organization (domains such as network/endpoint/email/firewall). We are collecting the data from different sources (syslog, ArcSight, apps).
I upgraded from 7.2 to 8.0, and then 8.0 to 8.2. After the upgrade to our distributed deployment, I am getting bombarded with email health alerts:

"sum_top3_cpu_percs__max_last_3m" is red due to the following: "Sum of 3 highest per-cpu iowaits reached red threshold of 15"
"avg_cpu__max_perc_last_3m" is red due to the following: "System iowait reached red threshold of 3"
"single_cpu__max_perc_last_3m" is red due to the following: "Maximum per-cpu iowait reached red threshold of 10"

I was getting them on my indexers yesterday, but this morning it seems to be our Enterprise Security SH, our deployment server, and our regular search head. I am unable to disable these alerts due to our company's policy.

What can I do to either a) resolve this CPU/iowait issue or b) change the alert settings? I don't notice a difference in performance. I'm just curious what's causing this CPU usage spike, because it seems to me, taking avg_cpu__max_perc_last_3m as the example, that it is going to alert me whenever the value is above 3%?
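If raising the thresholds (rather than disabling the alerts) is acceptable under your policy, the iowait indicators can be tuned in $SPLUNK_HOME/etc/system/local/health.conf, or via Settings > Health report manager in the UI. A minimal sketch with example values; pick numbers that suit your hardware, and verify the exact setting names against health.conf.spec for your version:

```
[feature:iowait]
# Raise the averaged-iowait thresholds (the "red threshold of 3" in the alert text)
indicator:avg_cpu__max_perc_last_3m:yellow = 5
indicator:avg_cpu__max_perc_last_3m:red = 10
# Raise the per-cpu and top-3 iowait thresholds
indicator:single_cpu__max_perc_last_3m:red = 25
indicator:sum_top3_cpu_percs__max_last_3m:red = 40
```

Note that despite the indicator names, these thresholds measure iowait percentage, not total CPU usage, so the "3" is 3% of time spent waiting on disk I/O rather than 3% CPU load.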
Hello all, we have data coming into Splunk via HEC ingestion, and I need help extracting fields, either at search time or index time. We need line breaking and field extractions. Below is a sample:

INFO 2021-10-27 07:31:00,004 [[MuleRuntime].io.4090: [bcom_membermasterbatch1].schedulerjobstatusFlow.BLOCKING @7a0bb47e] d4fff913-36f7-11ec-ba0c-11010ad55507org.mule.extension.jsonlogger.JsonLogger: { "correlationId" : "e4ggf523-27h7-11ec-ba0c-33333ad55333", "message" : "no key retrived", "tracePoint" : "START", "priority" : "INFO", "elapsed" : 0, "locationInfo" : { "lineInFile" : "222", "component" : "json-logger:logger", "fileName" : "schedulerjobstatus.xml", "rootContainer" : "schedulerjobstatusFlow"
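A hedged sketch of the index-time side. Note that line-breaking settings in props.conf only apply if the data arrives via the HEC raw endpoint (/services/collector/raw); with the event endpoint, each event is already delimited by the sender. The sourcetype name mule:json is made up for illustration:

```
[mule:json]
SHOULD_LINEMERGE = false
# Break before each new "LEVEL YYYY-MM-DD" prefix
LINE_BREAKER = ([\r\n]+)(?=(?:INFO|WARN|ERROR|DEBUG)\s+\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^\w+\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

For search-time extraction of the JSON payload out of the mixed text/JSON event, something along the lines of `| rex field=_raw "JsonLogger:\s*(?<json>\{.+)" | spath input=json` should pull the JSON portion into fields.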
eval _raw = msg
| rex "InputAmountToCredit\"\:\"(?<PayloadAmount>[^\"]+)"
| rex "Request\#\:\s*(?<ID1>\d+) with (?<Status>\w+.\w+)"
| rex "CRERequestId\"\:\"(?<ID2>[^\"]+)"
| eval ID=coalesce(ID1,ID2)
| stats latest(Status) as Status values(PayloadAmount) as Amount by ID
| stats count by Status
| eval _time=relative_time(now(),"-1d@d")
Hello, we are using ES, and we have a downloaded lookup file that has a mix of standalone IPs and CIDRs/subnets. The CSV file has 3 columns: Description, ip, time. I want to match dest_ip from my search results to any of the IPs in the lookup table (column "ip"), and if any match, the results should be displayed in table format. I am using this search:

| makeresults count=2
| streamstats count
| eval src_ip = case(count=1,"1.2.3.4", count=2,"2.3.4.5")
| eval dst_ip = case(count=1 OR count=2, "1.234.65.61")
| lookup ip_reputation_list ip as dst_ip OUTPUT ip
| table _time ip

Is the above correct? The problem I am facing is that even if no IPs match, the search results still show the _time column and an (empty) ip column. How do I get the search NOT to show any results if no IPs match the lookup? And secondly, can I compare an IP with a CIDR in the lookup?
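A sketch of one way to get both behaviors. Filtering out non-matches is a matter of testing the output field; for CIDR support, the lookup definition (Settings > Lookups > Lookup definitions) can be given match_type = CIDR(ip) in its advanced options (or in transforms.conf), which lets a single lookup match both exact IPs and CIDR ranges. Field names here follow the example search:

```
| makeresults count=2
| streamstats count
| eval dst_ip = case(count=1, "1.2.3.4", count=2, "1.234.65.61")
| lookup ip_reputation_list ip AS dst_ip OUTPUT ip AS matched_ip, Description
| where isnotnull(matched_ip)
| table _time dst_ip matched_ip Description
```

The `where isnotnull(matched_ip)` clause drops every row that found no match, so a search with zero matches returns zero results.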
Hi friends, I have integrated ServiceNow with Splunk through the Splunk Add-on for ServiceNow. I want to input "Cases", but it is not listed in the Input tab. I can see the incident input, but I don't need that; I want to ingest the cases/tickets created by customers. Any help would be appreciated. Thank you. @splunk
My lookup is a KV Store lookup. It has three columns: 'is_active', 'user', 'robot'. I have an SPL query that gives me more information about the user, and I want to enrich the lookup with additional columns from that SPL output. The SPL is:

index=population sourcetype=bsassioan | table age, gender, email, user_name

user_name is the same field as 'user' from the lookup. I want to update my KV Store lookup so that it contains the columns is_active, user, robot, age, gender, and email, matching user_name against the user field. If no match is found, let the new fields be empty. I don't want to override anything in the lookup, just add the new columns, with null/empty fields (no overwrite) where no user_name/user match is found.
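One possible shape for this, assuming the KV Store collection is exposed through a lookup definition called user_kv_lookup (a made-up name; substitute your own). A left join keeps every existing row and leaves the new columns empty where no user matches:

```
| inputlookup user_kv_lookup
| join type=left user
    [ search index=population sourcetype=bsassioan
      | stats latest(age) as age, latest(gender) as gender, latest(email) as email by user_name
      | rename user_name as user ]
| outputlookup user_kv_lookup
```

Because the join is type=left, rows without a match keep their original is_active/user/robot values and simply gain empty age/gender/email fields. For a KV Store collection you may also need to carry the _key field through so that outputlookup updates records in place rather than creating duplicates; check the KV Store lookup docs for your version.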
Hello *, I am looking for an SPL that reads the first part of a string via regex and replaces all occurrences of a certain character with this first part. This is to be used for summary indexing. Example:

1;A__B
2;B__C__D__A__E
3;G

is to be transformed into (each line will become a value in a multivalue field):

1;A
1;B
2;B
2;C
2;D
2;A
2;E
3;G

Neither replace nor rex seems able to perform multiple replacements of this kind. I also tried foreach with some field extractions but failed. Before I write a custom search command for it, I am hoping for your ideas on how to solve the problem with some clever standard SPL. Thank you in advance for your thoughts!
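If Splunk 8.0 or later is available, split plus mvmap can do this without a custom command. A sketch, assuming each raw line has the shape `<id>;<parts separated by __>`:

```
| rex field=_raw "^(?<id>[^;]+);(?<parts>.+)$"
| eval parts=split(parts, "__")
| eval out=mvmap(parts, id . ";" . parts)
```

For the line 2;B__C__D__A__E this yields a multivalue field out containing 2;B, 2;C, 2;D, 2;A, 2;E. On versions before 8.0 (no mvmap), one workaround is `| eval out=split(id . ";" . replace(parts, "__", "," . id . ";"), ",")`, which rewrites the separators so that a single split produces the same multivalue result.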
I currently have a Splunk cluster that looks like this:

Splunk        CentOS Version   Splunk Version
Master        7.5              7.0.0
Forwarder     7.5              Universal Forwarder 6.6.3
Search Head   6.5              7.0.0
Indexer 1     6.5              7.0.0
Indexer 2     6.5              7.0.0
Indexer 3     6.5              7.0.0
Indexer 4     6.5              7.0.0

I have 4 new, better servers that I want to build as indexers to replace the 4 indexers that I currently have (while moving from Splunk 7.0.0 to Splunk 8.x in the process). It seems to me that I have two options:

Option 1
Create a new indexer cluster with the 4 new indexers, so that I have an old indexer cluster and a new indexer cluster. Send new data to the new indexer cluster only; in time the data on the old indexer cluster will age off and I can retire that cluster. During this time I will configure the search head to search across both indexer clusters so that both the old and new data are searchable.

Option 2
Add the new indexers to the existing indexer cluster, so that I have 4 old indexers and 4 new indexers in the same cluster. Allow the data to replicate across the 8-node cluster, and then retire the old indexers one by one, leaving me with the 4 new indexers only.

In both scenarios the old indexers will be CentOS 6.x and the new indexers will be CentOS 7.x, but as I understand it this won't be a problem. Would anyone be able to offer any advice on which method would be best/easiest, or point out any pitfalls or anything obvious that I may not have considered?
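For Option 2, the per-peer retirement step can be sketched roughly as follows. This is an outline, not a runbook; check the "Take a peer offline" documentation for your target version, and note that the 7.0-to-8.x upgrade path has its own sequencing requirements for master and peers:

```
# On the master, optionally rebalance so the new peers carry their share of buckets:
splunk rebalance cluster-data -action start

# Then, on each old indexer in turn:
splunk offline --enforce-counts
# --enforce-counts makes the peer wait until its buckets have been copied
# elsewhere and the replication/search factors are met before shutting down.
# Wait for the cluster to report both factors met before offlining the next peer.
```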
We are receiving the same event across multiple notables. We would like a way to stop the duplicate events, or to keep them from being generated in two different notables. Any suggestions?
Hi all, I am getting the below error in our SHC:

Unable to initialize modular input "checkpoint_opseclea" defined in the app "Splunk_TA_checkpoint-opseclea": Introspecting scheme=checkpoint_opseclea: script running failed (exited with code 1).

I am also unable to load the Inputs and Configuration pages of the app. Splunk_TA_checkpoint-opseclea version is 4.3.1; Splunk Enterprise version is 8.1.2. In the link below, Splunk Enterprise 8.1.2 is not listed in the compatible version list for the Splunk_TA_checkpoint-opseclea app:

https://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Releasehistory

This issue is not in the known issues. Some of the HFs also have version 4.3.1 of the app, but there the Inputs and Configuration pages load fine. Is this because of the version compatibility, or is there some other reason?
Hello everyone, I have looked a lot for the right solution to this, but I have not been able to work it out. I am very new to Splunk, and I am trying to edit the font size and color of the label at the very top of my dashboard. I was able to change the font size and color for the individual panels I built, but I have had no luck editing the label. I have pasted the code below to give a better idea of what I am working with.

<form theme="dark">
  <label>Trial Dashboard</label>   <!-- I am trying to format the font size and color here -->
  <description>Select the time limit</description>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time_tok" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-30d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel id="trial1">
      <title>Sample Pannel</title>
      <single>
        <search>
          <query>sample_query_in_here</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>300s</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="height">249</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
      <html>
        <style>
          #trial1 .dashboard-panel h2 {
            font-size: 28px !important;
            color: #9000ff !important;
            font-weight: bold !important;
          }
        </style>
      </html>
    </panel>
  </row>
</form>

The second line from the top, "Trial Dashboard", is the label I am trying to change. I hope to find a solution similar to the way I added the formatting to the panels, as it was easy to keep it all in the same XML source. Thank you in advance, Rookie99
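A commonly suggested pattern for this (untested here, and the selector may differ between Splunk versions, so confirm it with your browser's developer tools) is to add a hidden row whose only job is to carry CSS targeting the dashboard header, which is where the <label> text is rendered:

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        .dashboard-header h2 {
          font-size: 28px !important;
          color: #9000ff !important;
          font-weight: bold !important;
        }
      </style>
    </html>
  </panel>
</row>
```

The depends token is never set, so the row stays invisible while its CSS still applies, keeping everything in the same XML source as the panel styles.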
Hi, here is the log:

2021-10-26 08:17:19,117 WARN AbCD-App2-0000 [SqlExceptionHelper] SQL Error: -268, SQLState: 23000
2021-10-26 08:01:47,674 WARN CUS.AbCD-APP2-0000000 [SqlExceptionHelper] SQL Error: -100, SQLState: IX000

Expected output:

Error   Status
268     23000
100     IX000

Thanks
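A sketch of a search-time extraction for this, assuming the error code should be captured without its leading minus sign, as in the expected output:

```
| rex "SQL Error: -(?<Error>\d+), SQLState: (?<Status>\w+)"
| table Error, Status
```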
Hi, we are integrating Phantom with Splunk using the doc below:

https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.14/PhantomRemoteSearch/Connecttosinglesplunk

As per the doc, we create two users, phantomsearchuser and phantomdeleteuser. May I know why these users are created and what they do?

Also, as per https://splunkbase.splunk.com/app/4399/#/details, a userid is created at the Phantom end and added to the lookup below in Splunk (I have created it as phantom_test at the Phantom end). May I know why this user is required?

4 - Edit phantomusers.csv file under <Splunk>/etc/apps/splunk_app_phantom/lookups and add new entries. Each entry should map the phantom userid to the phantom username. You can get the userid/username mapping from your Phantom instance under Administration -> User Management -> Users and click on each individual user to get the userid.

In total I have 3 users (2 users created in Splunk and 1 user created in Phantom).
I am about to upgrade the Security Essentials app (installed on ES) to its most current version, 3.4.0. I read that Security Essentials depends on the ES Content Update app. The question is: do I need to update that to its latest version as well? Thank you very much in advance for your response.
I'm trying to configure a CloudWatch Logs input, but I continue to receive invalid-key errors when restarting Splunk on the HF. I've gone by the doc and opened a support case, but haven't had success. The samples I've seen and the keys the support engineer told me to try are all different from the documentation. I'm starting to think this input type is just bugged and doesn't work at all. Has anyone been able to configure this type of input? And if so, do you mind sharing what you did?

Errors:

Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 34: account (value: REDACTED).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 35: groups (value: /blah/blah/redacted, /blah/more/redacted).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 38: region (value: us-east-1).

Details:

Splunk Enterprise 8.1.4
AWS add-on 5.0.3
Installed on Amazon Linux
Doc: https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs

My current stanza:

[aws_cloudwatch_logs://REDACTED]
account = REDACTED
groups = /blah/blah/redacted, /blah/more/redacted
index = staging
sourcetype = aws:cloudwatchlogs:log
interval = 1800
region = us-east-1
disabled = 0
Hello, can anyone tell me how to exclude the subsearch result from the main search? I want to exclude results that failed at the first attempt where the person later purchased successfully. I only want to capture PURCHASEID(s) that failed and have not been able to purchase yet. Here's the pseudo code for what I am trying to accomplish:

index=main sourcetype="access_combined_wcookie" AND ("*TIME_OUT*")
| rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
| search NOT
    [ search index=main sourcetype="access_combined_wcookie" AND ("*Successfully Ordered*")
      | rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
      | table PURCHASEID ]
| table PURCHASEID, _raw
| dedup PURCHASEID
| sort +PURCHASEID
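An alternative sketch that avoids subsearch result limits entirely: pull both outcomes in one search, classify each event, and keep only the purchase IDs that never saw a success. The outcome/outcomes field names are made up for illustration:

```
index=main sourcetype="access_combined_wcookie" ("*TIME_OUT*" OR "*Successfully Ordered*")
| rex field=_raw "\[(?<PURCHASEID>\d{12}-\d{3})\]"
| eval outcome=if(searchmatch("Successfully Ordered"), "success", "failure")
| stats values(outcome) as outcomes by PURCHASEID
| where isnull(mvfind(outcomes, "success"))
```

mvfind returns null when no value of the multivalue field matches, so the final where keeps exactly the IDs that failed and never succeeded.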
I am able to collect object data using the Splunk Add-on for Salesforce, but I have had no luck collecting events. I have checked permissions, and EventLog files are listed in my org when using curl or the ELF Browser (an app on Heroku), but I am not able to collect events in Splunk. My only guess is that I need the Event Monitoring license in Salesforce to use this add-on. Can you confirm?
It is necessary to check whether each user from the index is already in this file. If not, add the user to the file; if the user is already in the file, nothing happens.

file.csv:

username  info1  info2
john      abcd   qwer

index=IndexName:

username  info1  info2
Aram      ghjk   qweiq

Condition: if the user is not found in the file, write it to the file.

Expected output:

username  info1  info2
john      abcd   qwer
Aram      ghjk   qweiq
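A sketch of one way to do this with inputlookup/outputlookup, assuming file.csv is reachable as a lookup. dedup keeps the first occurrence it sees, so listing the existing file first guarantees that users already in the file are left untouched:

```
| inputlookup file.csv
| append
    [ search index=IndexName
      | stats latest(info1) as info1, latest(info2) as info2 by username ]
| dedup username
| outputlookup file.csv
```

Existing rows (john) pass through unchanged; usernames found only in the index (Aram) survive the dedup and are written to the file.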