All Topics

Regarding Website Monitoring and the "Status Overview" page: is it possible to change the threshold value at which the status changes color?
Hi, I created an alert for monitoring orphaned enabled searches. It is finding saved searches that were private searches of users who have since been deleted. Now I don't know how to get them deleted. My question is: should I even be worried about them? Since they are private searches and the user is deleted, they shouldn't be running. Also, if they are not running, why do they show as enabled? And if I should be worried, how can I delete them? They don't appear in the place where all saved searches are shown.
Need help with regex output. Below is my _raw input:

```
<MTIER><TID: 0000000181> <CATEGORY:com.remedy.log.WEBSERVICES > /* Wed Apr 22 2020 14:49:21.277 */ <LEVEL: FINE > <Class: com.remedy.arsys.ws.services.ARService><Method: performOperation><TENANT: null > <USER: na\xsmoogitsm > input document: <?xml version="1.0" encoding="UTF-8"?> <ROOT><Incident_Number>INC000013542725</Incident_Number><Work_Log_Type>General Information</Work_Log_Type><Locked>No</Locked><View_Access>Public</View_Access><Summary>AlertID: 171945364 : Open : USWNK-WANRTC001 : Node is down. Class:Custom Trap Host: uswnk-wanrtc001.</Summary><Notes>AlertID: 171945364 : Open : USWNK-WANRTC001 : Node is down. Class:Custom Trap Host: uswnk-wanrtc001.sa.xom.com External ID: 34217579 Tier: 3 Impact: 2-Significant/Large Urgency: 2-High</Notes><z2AF_Attachment1_attachmentOrigSize>0</z2AF_Attachment1_attachmentOrigSize><z2AF_Attachment2_attachmentOrigSize>0</z2AF_Attachment2_attachmentOrigSize><z2AF_Attachment3_attachmentOrigSize>0</z2AF_Attachment3_attachmentOrigSize></ROOT>
```

I want the content below to be extracted:

```
Class:Custom Trap
Open : USWNK-WANRTC001 : Node is down.
Host: uswnk-wanrtc001.sa.xom.com
Impact: 2-Significant/Large
Urgency: 2-High
INC000013542725
AlertID: 171945364
Wed Apr 22 2020 14:49:21.277
```

The expected output is a table. The spaces and punctuation are where I'm facing challenges. Please help me with this. Here are a few of the fields I tried to capture:

```
| rex field=_raw "Incident_Number\W(?<ITSM_Number>.*)\W\WIncident_Number\W.*"
| rex field=_raw "(Host:\s)(?<Hostname>[^\.<]+\.)"
| rex field=_raw "(Urgency:\s)(?<Urgency>\S-\D*[{lmwh}$])"
| rex field=_raw "(AlertID:\s)(?<AlertID>[^\D*]+)"
| rex field=_raw "(Open\s:\s)(?<Description>[^\.*]+)"
```
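A quick way to sanity-check extraction patterns outside Splunk is Python's `re` module (Splunk's `rex` uses PCRE, so simple patterns like these carry over with little change). This is a sketch against a trimmed copy of the sample event; the patterns are suggestions, not the original attempts:

```python
import re

# A trimmed copy of the _raw event (the <Notes> portion carries the fields)
raw = ("<ROOT><Incident_Number>INC000013542725</Incident_Number>"
       "<Notes>AlertID: 171945364 : Open : USWNK-WANRTC001 : Node is down. "
       "Class:Custom Trap Host: uswnk-wanrtc001.sa.xom.com "
       "External ID: 34217579 Tier: 3 Impact: 2-Significant/Large "
       "Urgency: 2-High</Notes></ROOT>")

# Candidate patterns, one per target field
patterns = {
    "ITSM_Number": r"<Incident_Number>(?P<v>[^<]+)</Incident_Number>",
    "AlertID":     r"AlertID:\s(?P<v>\d+)",
    "Hostname":    r"Host:\s(?P<v>[\w.-]+)",
    "Impact":      r"Impact:\s(?P<v>\S+)",
    "Urgency":     r"Urgency:\s(?P<v>[\w-]+)",
}

# Apply each pattern and collect whatever matched
fields = {name: m.group("v")
          for name, pat in patterns.items()
          if (m := re.search(pat, raw))}
print(fields)
```

In SPL, each pattern would go in its own `| rex field=_raw "..."` stage with the same named group.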
Hi, as soon as data moves from cold to frozen, does it get deleted? How does data move from a frozen bucket to a thawed bucket? Is the data in the thawed bucket searchable? How long will data stay in the thawed bucket, and will it move back to frozen again? If we need the data for years, where and how should we store it?
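For reference, the usual behavior (check the indexes.conf spec for your version): frozen data is deleted by default unless `coldToFrozenDir` or `coldToFrozenScript` is set, in which case buckets are archived instead; thawing is a manual step (copy the archived bucket into the index's `thaweddb` directory and run `splunk rebuild`); thawed data is searchable and never ages out on its own — you remove it yourself, and it does not move back to frozen automatically. A minimal archival sketch, with example paths and retention:

```
# indexes.conf — a sketch; index name, path, and retention are examples only
[my_index]
frozenTimePeriodInSecs = 188697600   # ~6 years (Splunk's default) before buckets freeze
coldToFrozenDir = /archive/my_index  # archive at freeze time instead of deleting
```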
Hello, I've gone through a hundred of these types of posts and nothing is working for me. Here is the nested JSON array that I would like to split into a table of individual events, based on computer.hardware.storage.device.partition{} and computer.general.name. Once I have these split into individual events, I would like to put only the 'boot' device event in the table.

```
{
  "computer": {
    "general": {
      "name": "woohoo-l3"
    },
    "hardware": {
      "storage": {
        "device": {
          "partition": [
            {
              "name": "Macintosh HD (Boot Partition)",
              "type": "boot",
              "filevault_status": "Encrypted",
              "filevault_percent": "100",
            },
            {
              "name": "Recovery",
              "type": "other",
              "filevault_status": "Not Encrypted",
              "filevault_percent": "0",
            }
          ]
        }
      }
    }
  }
}
```

I have come up with the following search, but it does not do what I want. I've been messing with this all day and I'm stuck. Any help would be greatly appreciated!

```
index=sec-inventory sourcetype="jamf-computers" "c02z912nlvdl"
| spath
| rename computer.hardware.storage.device{}.partition.filevault_status as filevault_status
| rename computer.hardware.storage.device.partition{}.type as partitiontype
| rename computer.general.name as computername
| eval zipped=mvzip(filevault_status, partitiontype)
| mvexpand zipped
| eval zipped=split(zipped, ",")
| eval filevault_status=mvindex(zipped, 0)
| eval type=mvindex(zipped, 1)
| fillnull value="null"
| table computername, partitiontype, filevault_status
| search partitiontype="boot"
```

The table should look like:
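Outside SPL, the target logic is easy to state. The Python sketch below (with the trailing commas removed so the JSON parses) flattens the partition array into one row per partition and keeps only the boot device; in SPL the analogous direction is `spath path=computer.hardware.storage.device.partition{} output=partition | mvexpand partition`, but treat that as a suggestion rather than a verified answer:

```python
import json

event = json.loads("""
{
  "computer": {
    "general": {"name": "woohoo-l3"},
    "hardware": {"storage": {"device": {"partition": [
      {"name": "Macintosh HD (Boot Partition)", "type": "boot",
       "filevault_status": "Encrypted", "filevault_percent": "100"},
      {"name": "Recovery", "type": "other",
       "filevault_status": "Not Encrypted", "filevault_percent": "0"}
    ]}}}
  }
}
""")

computername = event["computer"]["general"]["name"]
partitions = event["computer"]["hardware"]["storage"]["device"]["partition"]

# One row per partition, keeping only the boot device
rows = [{"computername": computername,
         "partitiontype": p["type"],
         "filevault_status": p["filevault_status"]}
        for p in partitions if p["type"] == "boot"]
print(rows)
```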
I was trying to test the IT Shared content pack. I downloaded and restored it, and we see the glass table and services deployed, but when we look at one of the services, such as NTP, there are no preconfigured base-search KPIs — only one: Heartbeat. Is this expected? Are we missing something? If we deploy the Monitoring Unix and Linux content pack, we see preconfigured base searches with many KPIs, and it looks like it has the correct base search.
Hey guys, I'm slightly new to Splunk. I have done a few searches in my time; however, I am currently stuck on dropdowns. I have a search that depends on 3 dropdown inputs, 2 of which are closely related to each other and are my main concern. To put it simply, I have one dropdown which is the standard "Time" input and a second one which is "Username". What I want to do is restrict the search of usernames based on the time frame the user selects in the "Time" input. I just cannot get it to work, and I have no way of entering the token $time$. See the code below for context; please ask any questions that might help me solve this. I've also tried adding a premade time entry input panel, although I'd prefer to use the default because it has more presets.

```
<form theme="dark">
  <label>HealthRoster Actions Clone (Ajays)</label>
  <search id="base_search">
    <query>sourcetype=iis | table a_action,a_action_type,cs_User_Agent,a_module,a_request,a_process_action,a_module_detail,a_request_type,app,LHD,cs_username, _time</query>
  </search>
  <fieldset submitButton="false" autoRun="false">
    <input type="time" token="date" searchWhenChanged="false">
      <label>Time &amp; Date</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="LHD" searchWhenChanged="true">
      <label>LHD</label>
      <search base="base_search">
        <query>| stats count by LHD</query>
      </search>
      <fieldForLabel>LHD</fieldForLabel>
      <fieldForValue>LHD</fieldForValue>
    </input>
    <input type="dropdown" token="username" searchWhenChanged="true">
      <label>Username</label>
      <fieldForLabel>username</fieldForLabel>
      <fieldForValue>cs_username</fieldForValue>
      <search base="base_search">
        <query>| search LHD=$LHD$ | stats count by cs_username</query>
      </search>
    </input>
    <input type="dropdown" token="time">
      <label>Time Entry</label>
      <choice value="-30m@m">Last Half Hour</choice>
      <choice value="-60m@m">Last 1 hour</choice>
      <choice value="-240m@m">Last 4 hours</choice>
      <choice value="-24h">Last 24 hours</choice>
      <choice value="-48h">Last 2 days</choice>
      <choice value="-10d@d">Last 10 days</choice>
    </input>
  </fieldset>
  <row>
    <panel>
```

Thanks
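One common cause here (an assumption, not verified against this dashboard): a base search does not pick up an input's time token by itself, and a dropdown's `<choice>` values do nothing unless the token is referenced somewhere. A sketch of giving the Username dropdown its own populating search bound to the existing time picker — `$date.earliest$`/`$date.latest$` are the tokens exposed by the `token="date"` time input already in the dashboard:

```
<input type="dropdown" token="username" searchWhenChanged="true">
  <label>Username</label>
  <fieldForLabel>cs_username</fieldForLabel>
  <fieldForValue>cs_username</fieldForValue>
  <search>
    <query>sourcetype=iis LHD=$LHD$ | stats count by cs_username</query>
    <earliest>$date.earliest$</earliest>
    <latest>$date.latest$</latest>
  </search>
</input>
```

The same tokens can be applied to a panel's main search. The separate `token="time"` dropdown's value (`$time$`) could instead be fed into a search's `<earliest>$time$</earliest>`, but keeping both it and the time picker is redundant.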
I am trying to use register_replication_address to tell a cluster master running in front of an ELB to contact the indexer on a different address. However, when I try to add the peer, I get the following error on the CM:
I am trying to use invalid_replication_address to tell a cluster master running in front of an ELB to contact the indexer on a different address. However when i try to add the peer I get the following error on the CM: REST_Calls - app=search POST cluster/master/peers/ id=526D8BF5-7412-4934-AC47-08C699290CC9: active_bundle_id -> [14310A4AABD23E85BBD4559C4A3B59F8], add_type -> [Initial-Add], base_generatio n_id -> [0], batch_serialno -> [1], batch_size -> [2], buckets -> [], forwarderdata_rcv_port -> [9997], forwarderdata_use_ssl -> [0], indexes -> [], last_complete_generation_id -> [0], latest_bundle_id -> [14310A4AABD23E85BBD 4559C4A3B59F8], mgmt_port -> [8089], register_forwarder_address -> [], register_replication_address -> [https://10.0.7.181:8089], register_search_address -> [], replication_port -> [9887], replication_use_ssl -> [0], replicat ions -> [], server_name -> [ip-10-0-7-181.ca-central-1.compute.internal], site -> [default], splunk_version -> [8.0.2], splunkd_build_number -> [a7f645ddaf91], status -> [Up] INFO AdminManager - Setting capability.write=edit_indexer_cluster for handler clustermasterpeers. INFO AdminManager - Setting capability.read=edit_indexer_cluster for handler clustermasterpeers. DEBUG AdminManager - Validating argument values... DEBUG AdminManagerValidation - Validating rule='validate(len(name) < 1024, 'Parameter "name" must be less than 1024 characters.')' for arg='name'. 
ERROR ClusterMasterPeerHandler - Invalid host name https://10.0.7.181:8089 DEBUG AdminManager - URI /services/cluster/master/peers/?output_mode=json generated an AdminManagerExceptionBase exception in handler 'clustermasterpeers': Invalid host name https://10.0.7.181:80 89 INFO CMSlave - event=addPeer status=failure shutdown=false request: AddPeerRequest: { _id= _indexVec=''active_bundle_id=14310A4AABD23E85BBD4559C4A3B59F8 add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=2 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=14310A4AABD23E85BBD4559C4A3B59F8 mgmt_port=8089 name=526D8BF5-7412-4934-AC47 08C699290CC9 register_forwarder_address= register_replication_address=https://10.0.7.181:8089 register_search_address= replication_port=9887 replication_use_ssl=0 replications= server_name=ip-10-0-7-181.ca-central-1 compute.internal site=default splunk_version=8.0.2 splunkd_build_number=a7f645ddaf91 status=Up } 04-23-2020 02:03:56.478 +0000 ERROR CMSlave - event=addPeer start over and retry after sleep 12800ms reason addType=Initial Add Batch SN=1/2 failed. add_peer_network_ms=5 Notice how it says something regarding the name being less than 1024 characters and it possibly failing validation? The Cluster Master can "resolve" the IP ..although its an IP so see no reason why it should resolve it although the "null" cant resolve is weird.. I added a hostfile..no diffference: ` nslookup: can't resolve '(null)' Name: 10.0.7.181 Address 1: 10.0.7.181 ip-10-0-7-181.ca-central-1.compute.internal The Clustermaster can reach the Indexer on that port: Ncat: Version 7.70 ( https://nmap.org/ncat ) Ncat: Connected to 10.0.7.181:8089. ` Any reason why this happens? 
I've read a few posts, and register_replication_address seems to be the solution to my problem; however, I am unsure why it is "unable to resolve hostname". *** UPDATE *** I also want to add that I've been doing more testing on some nodes that are just two EC2 instances with all traffic allowed between them. nslookup on AWS for the IPs is fine, and I still cannot get this working. If I remove register_replication_address in these cases, it works fine — this is really weird. I'm not sure what the issue is or how to troubleshoot further when the log just says "invalid hostname".
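Going only by the error text, one likely culprit: the peer's `register_replication_address` appears to be set to a full URL (`https://10.0.7.181:8089`), while the setting expects a bare hostname or IP — no scheme, no port. A server.conf sketch on the peer, with example values:

```
# server.conf on the indexer (peer) — a sketch; master_uri is an example
[clustering]
mode = slave
master_uri = https://cm.example.com:8089
# Bare IP or hostname only; the replication port is configured separately
register_replication_address = 10.0.7.181
```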
Hello, I have a query which joins across 4 sources, and CorrelationId may or may not exist in all of them. I want to print the id and "exists or not exists" for each source. I wrote the query below, but the actual results only show the one source where the value is present and do not show the other 3 sources. Any ideas how this query can be modified to show the expected results?

Query:

```
sourcetype=API_HUB OR sourcetype=API_RISK OR sourcetype=API_SECURE OR sourcetype=API_PAYMENT
| eval CorrelationId = coalesce(CorrelationId,"Not exists")
| table sourcetype,CorrelationId
| dedup CorrelationId,sourcetype
| sort(sourcetype,CorrelationId)
```

Actual results:

```
sourcetype    CorrelationId
API_RISK      123456-34344-55555
```

Expected results:

```
sourcetype    CorrelationId
API_RISK      123456-34344-55555
API_PAYMENT   Not exists
API_SECURE    Not exists
API_HUB       Not exists
```
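The reason only API_RISK appears is that `coalesce` can only rewrite fields on rows that exist; it cannot invent rows for sourcetypes that returned no events. The gap-filling logic needed is "start from the full list of expected sources and fill in the missing ones", sketched here in Python with the sourcetype list hard-coded for illustration (in SPL, a common pattern is appending the expected list via `| append` or a lookup, then `fillnull` — a direction to try, not a verified answer):

```python
# The four sourcetypes the search targets (hard-coded for illustration)
expected = ["API_HUB", "API_RISK", "API_SECURE", "API_PAYMENT"]

# What the search actually returned: only sources where the id was found
found = {"API_RISK": "123456-34344-55555"}

# Emit one row per expected source, defaulting to "Not exists"
table = [(st, found.get(st, "Not exists")) for st in expected]
for st, cid in table:
    print(st, cid)
```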
Hi Experts, please suggest how to join the output of two Splunk indexes. I have two indexes; from the first index, I want to fetch only the data that occurred on dates in the holiday list's OCCURRENCEDATE field.

```
index=monitor
index=holiday_list earliest=0 latest=now | dedup OCCURRENCEDATE | table OCCURRENCEDATE
```
We currently use the REST API's ability to export/import custom rules defined in the APM module for our applications. How do we do this for EUM-based custom rules for Base Pages, AJAX, iFrames, and Virtual Pages?
We use an Internet traffic security provider (e.g., Akamai, Cloudflare) to monitor our inbound traffic and to mitigate and prevent malicious actions against our sites. The tool uses an array of attributes, such as source IP address, User-Agent, and demographics, in its analysis. It also uses its own internal machine learning to flag and alert on potential concerns based on known bot types and reputations. We also use End User Experience Synthetics for most of our external-facing platforms to validate availability and functionality. These run from various AppDynamics-hosted sites, not internally hosted ones. How do we whitelist our synthetics run from AppDynamics-hosted sites, to ensure that they are not unintentionally captured and flagged as potential bad actors?
I'm trying to mask the password out of the log below, and I'm not sure what I'm doing wrong.

Log:

```
[22/Apr/2020:19:29:57 -0400] MODIFY INT conn=88927 op=65 msgID=66 PLUGIN=Modify: modifications: [Modification(replace, Attribute(userPassword, {userpassword})]
```

My props.conf:

```
[source::mysourcetype]
SEDCMD-mask = ^\[.+\sPLUGIN=Modify: modifications: \[Modification\(replace, Attribute\(userPassword, {.+}(.+)/(userPassword, {#####}\g/1
```

But it's not matching and replacing the password with the #####'s.
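SEDCMD values use sed substitute syntax, `s/regex/replacement/flags`, with backreferences written `\1` rather than `\g/1`, so the attempt above is missing the leading `s/`. The substitution itself can be tested outside Splunk; here is a Python sketch of one pattern that masks only the braced value (the log line is shortened, and `secretvalue` is a stand-in):

```python
import re

log = ("[22/Apr/2020:19:29:57 -0400] MODIFY INT conn=88927 op=65 msgID=66 "
       "PLUGIN=Modify: modifications: [Modification(replace, "
       "Attribute(userPassword, {secretvalue})]")

# Replace whatever sits inside {...} after "userPassword, " with #####
masked = re.sub(r"(Attribute\(userPassword, \{)[^}]+(\})", r"\1#####\2", log)
print(masked)
```

A rough props.conf equivalent (untested) would be `SEDCMD-mask = s/(Attribute\(userPassword, \{)[^}]+\}/\1#####}/`.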
All, I'm setting up a Splunk instance. In the past I used a load balancer that handled certs for me, but this instance is going to terminate TLS directly on the Splunk instance. Getting Let's Encrypt working on Splunk Web was trivial — about 20 minutes of work. But for the life of me, I can't get it working with the HTTP Event Collector (HEC). Are there any walkthroughs or tutorials on cert management for HEC, ideally with Let's Encrypt? I thought I could just point at the same files as Splunk Web, but it doesn't seem to work that way. Thanks, Daniel
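A sketch of the usual direction (assumptions: paths are examples, and details vary by Splunk version): HEC takes its TLS settings from the `[http]` stanza in inputs.conf rather than from web.conf, which is why pointing at the Splunk Web files is not enough; and splunkd-side certs are generally expected as a single PEM containing the server certificate, the private key, and the chain — for Let's Encrypt, that means concatenating `fullchain.pem` and `privkey.pem`:

```
# inputs.conf — a sketch; paths and filenames are examples only
[http]
disabled = 0
enableSSL = 1
# One PEM: server cert + private key + intermediate chain
serverCert = /opt/splunk/etc/auth/mycerts/hec-combined.pem
sslPassword = <key passphrase, if the key is encrypted>
```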
The documented workaround doesn't make sense. An aggregation policy cannot be triggered to break a group on a group-by-group basis; it's all or nothing. Any advice is helpful.
I installed the Splunk App for Web Analytics. The data model status (of WebAnalytics) shows "Building"; however, the status is never updated and no data is ingested into the data model, although there is data available within the data model constraints. Any clues? Thanks.
I have an issue: I am trying to export some data using DB Connect to a SQL database. If I have 48,000 rows and 362 columns, only 41,000 rows get loaded into the database; but if I reduce the number of columns from 362 to 300, all the rows load. I think it is hitting some limit. If anyone can help with this, that would be great.
Hello, I configured the UF to monitor a JSON file in a specific directory, but it's not forwarding it to the indexers. The output is working properly, as there are other files being sent to the indexers.

Here is my inputs file:

```
[monitor://C:\temp*.json]
index = test1
sourcetype = test_styp
```

My props:

```
[test_styp]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N+%4N
TIME_PREFIX = "observedTime":"
MAX_TIMESTAMP_LOOKAHEAD = 28
```

The Splunk log states the following: "Adding watch on path splunk [monitor://C:\temp*]", but nothing is being ingested. I tried running this SPL search on my SH to check if something related to the JSON extraction is off, but nothing was returned:

```
test_styp | rex "incoming=\"(?.+)\", transformed=" | spath = incoming
```

Could you please help?
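Two things worth checking (suggestions, not a confirmed diagnosis): `[monitor://C:\temp*.json]` matches paths *beginning with* `C:\temp`, not `.json` files *inside* `C:\temp` — monitoring the directory with a whitelist is the usual way to express the latter; and with `INDEXED_EXTRACTIONS`, the props.conf stanza needs to live on the forwarder itself, since structured parsing happens at the UF. A sketch:

```
# inputs.conf on the UF — watch the directory, restrict to .json files
[monitor://C:\temp]
index = test1
sourcetype = test_styp
whitelist = \.json$
```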
I have multiple events on a server. I would like to get the timestamp of the very first transaction and the timestamp of the very last transaction for each feature, then get the difference between them in hours, in a table format. My JSON looks like this.

Multiple Begin events like this:

```
{
  VacuumTaskStepFunctionBegin: {
    NumberOfCustomers: 55
    QueueURL: https://xxxxxxxxxx.fifo
  }
}
```

Multiple Finish events like this:

```
{
  VacuumTableFinish: {
    DbName: xxxxx
    DbSize: 55 GB
    QueryTimeApproxSeconds: 20
    ServerName: xxxxx
    TableSize: 100 MB
    Query: Select * from xxxxxxx;
  }
}
```

I am looking for a result like this:

```
Function   | Starttime           | Endtime             | TimeProcessing | ServerCount | DB Count
VacuumTask | 03-04-2020 08:00 am | 03-05-2020 08:00 am | 24 hours       | 10          | 55
```

But I also have more functions like this for other features, so my end table would look like:

```
Function   | Starttime           | Endtime             | TimeProcessing | ServerCount | DB Count
VacuumTask | 03-04-2020 08:00 am | 03-05-2020 08:00 am | 24 hours       | 10          | 55
Importer   | 03-04-2020 08:00 am | 03-04-2020 09:00 am | 1 hour         | 20          | 35
Lambda     | 03-04-2020 08:00 am | 03-04-2020 08:00 pm | 12 hours       | 15          | 20
```

And so on. All events I have look like the JSON posted above. Thank you in advance.
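In SPL, the usual shape for this is `stats earliest(_time) AS start latest(_time) AS end by Function | eval TimeProcessing=(end-start)/3600` (field names assumed, not taken from the events). The core arithmetic — first timestamp, last timestamp, difference in hours — sketched in Python with made-up timestamps:

```python
from datetime import datetime

# (function, timestamp) pairs as they might be parsed from the events
events = [
    ("VacuumTask", "03-04-2020 08:00"),
    ("VacuumTask", "03-04-2020 12:30"),
    ("VacuumTask", "03-05-2020 08:00"),
]

times = [datetime.strptime(t, "%m-%d-%Y %H:%M") for _, t in events]
start, end = min(times), max(times)

# Elapsed time between first and last transaction, in hours
hours = (end - start).total_seconds() / 3600
print(start, end, hours)
```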
Got an alert for a HF restarting, and I'm trying to find the root cause of the unexpected restart. I'm using the search below, and the results shown are at the start of the event which led to the "Starting Splunk server daemon (splunkd)..." alert.

```
index=_internal source="/opt/splunk/var/log/splunk/splunkd_st*" host=MYHF
```

Results:

```
4/22/20 1:14:38.000 PM  Checking prerequisites...
4/22/20 1:14:38.000 PM  Splunk> The IT Search Engine.
4/22/20 1:14:38.000 PM  splunkd is not running. [FAILED]
4/22/20 1:14:34.824 PM  2020-04-22 13:14:34.824 -0400 splunkd started (build 6db836e2fb9e) pid=25388
4/22/20 1:14:34.000 PM  Bypassing local license checks since this instance is configured with a remote license master.
```

Is there anywhere I could look that would give a more specific reason why the HF restarted? Thanks in advance.