All Topics

Hi all, I tried to use the Alert Manager App on Splunk Cloud (it should be certified on Splunk Cloud) and most features work, but I'm receiving the following error in many dashboards: "A custom JavaScript error caused an issue loading your dashboard. See the Developer Console for more details." The affected dashboards are:
Reports – Stats Transitions
Reports – Incident Export
Settings – Incident Settings
Settings – Alert Status
Settings – Drilldown Actions
Settings – External Workflow Actions
Settings – User Settings
Settings – eMail Templates
Also, what is the Developer Console? In addition, I don't have any alert result details in the Incident Posture. Are there known issues on Splunk Cloud? On Splunk Cloud I cannot access the CLI, so how can I debug JS errors? Thank you for your attention. Ciao. Giuseppe
Hi All, we have a universal forwarder running on a Windows server which is sending data to our Splunk Cloud instance. Below are some details of the .conf files and logs:
inputs.conf
[default]
host = DB_DATA
[monitor://D:\ABC\DB_Monitoring\Cust]
disabled = 0
index = rjsql
sourcetype = csv
crcSalt = <SOURCE>
time_before_close = 60
props.conf
[default]
NO_BINARY_CHECK = true
CHARSET = AUTO
[source::D:\ABC\DB_Monitoring\Cust\*.csv]
CHECK_METHOD = modtime
Some files are either 1) not being indexed at all, or 2) have only their headers indexed. This doesn't happen with all the files, only some of them.
Logs from _internal (for a file which has only the header indexed):
Tailing processor file status:
btool output:
Can you please suggest what else I could check to resolve this intermittent issue? Thank you.
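One thing worth trying, sketched below: if the files at a given path get overwritten and always start with the same header row, the tailing processor's default 256-byte CRC can make a replaced file look like one it has already read, which matches the "only headers indexed" symptom. initCrcLength is a standard inputs.conf setting; the value 1024 here is only an illustrative guess:
[monitor://D:\ABC\DB_Monitoring\Cust]
disabled = 0
index = rjsql
sourcetype = csv
crcSalt = <SOURCE>
# include more of the file in the initial CRC so files that share an
# identical header are not mistaken for files that were already read
initCrcLength = 1024
time_before_close = 60
The tailing processor's own view of each file can also be checked with index=_internal sourcetype=splunkd component=WatchedFile on the forwarder's host.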
Hello. I am trying to use a drop-down selector (as opposed to the time selector) in my dashboard to create a token with an @d value in %e-%b-%Y %T.%L format. In other words, I want to provide users an option for "Today, Yesterday, This Week" and have that populate tokens with the earliest (@d) and latest (@d+86399) values of that day. Ex: Input: User selects TODAY Token.start: 16-jun-2022 00:00:00.000 Token.end: 16-jun-2022 23:59:59.999 This should be straightforward, but for some reason I am racking my brain trying to get this.
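A minimal Simple XML sketch of one way to do this. The input token name (period) and the output tokens (tok_start, tok_end) are placeholders, and the +86399.999 end-of-day offset matches the "Today" example; the "This Week" choice would need its own latest calculation:
<input type="dropdown" token="period">
  <label>Period</label>
  <choice value="@d">Today</choice>
  <choice value="-1d@d">Yesterday</choice>
  <choice value="@w0">This Week</choice>
  <default>@d</default>
  <change>
    <!-- relative_time() snaps now() to the selected modifier, strftime() formats it -->
    <eval token="tok_start">strftime(relative_time(now(), "$value$"), "%e-%b-%Y %T.%L")</eval>
    <eval token="tok_end">strftime(relative_time(now(), "$value$") + 86399.999, "%e-%b-%Y %T.%L")</eval>
  </change>
</input>
Panels can then reference $tok_start$ and $tok_end$ wherever the formatted date strings are needed.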
How do you tell how much you can ingest per day? There needs to be a way to check what you're licensed for, but I can't find this anywhere on the forums. Also, why isn't "Licensing" or "License" a label category?
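For what it's worth, a sketch of two searches that usually answer this, assuming your role can read the licenser REST endpoints and the _internal index:
| rest splunk_server=local /services/licenser/licenses
| eval quota_GB = round(quota/1024/1024/1024, 2)
| table title, type, quota_GB, status
index=_internal source=*license_usage.log type=RolloverSummary
| eval GB = round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) AS daily_ingest_GB
The first shows the licensed daily quota; the second shows what you actually ingested per day.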
Hi, I'm trying to remove blanks from a field when ingesting a CSV file. On the heavy forwarder I have tried to use a regex in props.conf and transforms.conf, but the data keeps being indexed with the blank spaces in the fields.
props.conf:
[blacklist]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
KV_MODE = auto
KV_TRIM_SPACES = true
SEDCMD-blacklist = s/(^|\s)($|\s)//g
TRANSFORMS-blacklist = blacklist_name
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
description = sourcetype for ingesting the blacklist data into the index
disabled = false
pulldown_type = true
FIELD_DELIMITER = ;
transforms.conf:
[blacklist_name]
SOURCE_KEY = field:mltf_blacklist_name
REGEX = ^\s*(\w+)(.*)\s
FORMAT = $1
WRITE_META = true
I've been going through the documentation and I'm a bit lost with the Splunk configurations. Any help will be appreciated. Thanks.
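A sketch of one alternative direction to test, assuming the goal is simply to strip spaces around the ; delimiter before the CSV is parsed. The pattern is only illustrative, and whether a SEDCMD runs before or after structured (INDEXED_EXTRACTIONS) parsing depends on where the parsing happens, so treat this as something to try rather than a guaranteed fix:
[blacklist]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
# strip whitespace immediately before and after each delimiter
SEDCMD-trim_blanks = s/[ \t]*;[ \t]*/;/g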
source="http:Emerson_P1CDN" | spath host | spath client_ip | spath status_code | where status_code=200 | spath referer | where referer="" | spath path | search path NOT ("*wcsextendedsearch" OR ... See more...
source="http:Emerson_P1CDN" | spath host | spath client_ip | spath status_code | where status_code=200 | spath referer | where referer="" | spath path | search path NOT ("*wcsextendedsearch" OR "*EmersonSKUListingView" OR "*EmersonProductListingView" OR "*CartRefreshStatusJSON" OR "*PriceAjaxView" OR "*AjaxSerialNumber" OR "*UnsupportedBrowserErrorView" OR "*LogonForm"OR "*MiniCart" OR "*MiniShopCartDisplayView" OR "*AnalyticsPageView" OR "*AjaxAccountLinkDisplay" OR "*.css" OR "*.js" OR "*.woff2" OR "*.woff" OR "*.gif" OR "*.png" OR "*.jpg" OR "*.ico" OR "*.pdf" OR "*.html" OR "*.txt" OR "*.xml" OR "*/ClickInfo" OR "*thumb") | bin _time span=1m | stats count by _time,host,path,client_ip | where count >= 100 | sort - count Does the query at the top is correct?, because we want to count the total events of _time,host,path and client_ip per minute
The current stanza that is working is:
[fschange:F:\bau\box\quest]
I need to narrow it to:
[fschange:F:\bau\box\quest\...\arch]
where quest has 5 folders, and each of them contains a folder named \arch. But it doesn't seem to work using \...\ or \*\. It is on a forwarder, and I have already restarted it.
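If wildcards turn out not to be honoured in fschange stanzas, one low-tech workaround is to enumerate the five paths explicitly; a sketch, where folder1 through folder5 are placeholders for the real folder names:
[fschange:F:\bau\box\quest\folder1\arch]
[fschange:F:\bau\box\quest\folder2\arch]
[fschange:F:\bau\box\quest\folder3\arch]
[fschange:F:\bau\box\quest\folder4\arch]
[fschange:F:\bau\box\quest\folder5\arch]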
Hello, I'm running this query:
| union
    [ search host="puppet-01" OR host="jenkins-01" OR host="ANSIBLE-01" sourcetype=ProductionDeploy NOT Permisson_Job_Name=*_permission Environment=PRODUCTION
      | table _time, App_Name, User, Change_Log_Description, Environment, Version ]
    [ search sourcetype=mscs:storage:blob:json
      | rex field=_raw "Details\":\"(?<Details>.*?)\","
      | rex field=_raw "ProjectName\":\"(?<ProjectName>.*?)\","
      | rex field=_raw "ScopeDisplayName\":\"(?<ScopeDisplayName>.*?)\","
      | rex field=_raw "releaseName\":\"(?<releaseName>.*?)\"}"
      | rex field=_raw "ActionId\":\"(?<ActionId>Release.ReleaseCreated)\","
      | rex field=_raw "ActorUPN\":\"(?<ActorUPN>.*?)\","
      | rex field=_raw "DeploymentResult\":\"(?<DeploymentResult>.*?)\","
      | rex field=_raw "PipelineName\":\"(?<PipelineName>.*?)\","
      | where releaseName != null AND PipelineName like "%Production"
      | rename ProjectName AS App_Name
      | rename ActorUPN AS User
      | rename releaseName AS Change_Log_Description
      | rename PipelineName AS Environment
      | rename DeploymentResult AS status
      | table _time, App_Name, User, Change_Log_Description, Environment, Version, status ]
| sort -_time asc
I'm trying to get the status field. The first search doesn't have this value, but the second one does, yet I don't see a status column in my results. Can someone explain why? Thanks.
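A sketch of one thing to check, assuming the problem is the where clause in the second subsearch: null is not a literal in SPL, so releaseName != null never evaluates to true and every row from that search (the only rows carrying status) may be filtered out before the union. isnotnull() and like() are the usual way to write that condition:
| where isnotnull(releaseName) AND like(PipelineName, "%Production")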
Hi Team, I am looking for how to pull license usage reports for the User Seat, Per Uptime, and Per API Runs license metrics, either from a CLI command or from a dashboard. The products for which I need to check license usage reports are:
Splunk Threat Research
Splunk Mission Control
Splunk SOAR (Cloud)
Splunk On-Call
Splunk User Behavior Analytics
Splunk Real User Monitoring
Splunk Synthetic Monitoring
I am unable to find any documentation or commands to fetch license usage reports for these products. Can anyone please help me with this? Thanks, Avinash Kumar
Hello, I have a daily license indexation limit, so I wanted to limit the data that gets indexed from a specific piece of equipment. I receive a large amount of logs from that equipment via syslog (I can't change which types of logs are sent to Splunk). So, to limit the amount of data being indexed, I filtered the data at the indexing phase in Splunk: I added a regex so that Splunk only indexes the wanted types of logs and ignores the other syslog messages from that equipment. I did this using TRANSFORMS-set in props.conf and the regular expression in transforms.conf. As a result, I get the following errors in Splunk health that I couldn't fix:
Ingestion Latency: Events from tracker.log have not been seen for the last 2940 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
TailReader-0: The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Whenever I remove the regex, the problem goes away, which means the regex is the only source of this problem. Thank you in advance for your help.
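For reference, a sketch of the usual "drop everything, then keep what matches" pattern; the sourcetype name and the patterns are placeholders. The main performance point is that both regexes should be as cheap as possible (specific and anchored, no leading .*), because an expensive filter in the typing queue can back up the pipeline exactly as described above:
props.conf:
[my_syslog_sourcetype]
TRANSFORMS-filter = drop_all_syslog, keep_wanted_syslog
transforms.conf:
[drop_all_syslog]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[keep_wanted_syslog]
# placeholder patterns; keep them as specific as practical
REGEX = (?:WANTED_TAG_1|WANTED_TAG_2)
DEST_KEY = queue
FORMAT = indexQueue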
Can we download Splunk User Behavior Analytics (the ".OVA" file) as a trial version or for free?
Hi All, I have logs like below in Splunk.
log1: "count":1,
log2: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM3
log3: "count":1,
log4: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM2
log5: "count":1,
log6: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM1
I used the below query to create a table showing the "Queue" and the "Consumer count":
***** | rex field=_raw "Rs\.(?P<Queue>\w+)" | rex field=_raw "count\"\:(?P<Consumer_Count>\d+)\," | table Queue,Consumer_Count
But this query gives the table in the below manner:
Queue      Consumer_Count
           1
APIM3
           1
APIM2
           1
APIM1
I want the rows to be combined in the below manner:
Queue      Consumer_Count
APIM3      1
APIM2      1
APIM1      1
Please help me modify the query to get the desired output. Thank you!
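Since the count and the queue name arrive as two separate, adjacent events, a sketch of one way to pair them with streamstats. It assumes each "count" event sits immediately next to its queue event in the search result order; if the order is reversed, add | reverse before the streamstats:
*****
| rex field=_raw "Rs\.(?P<Queue>\w+)"
| rex field=_raw "count\"\:(?P<Consumer_Count>\d+)\,"
| streamstats window=2 last(Consumer_Count) AS paired_count
| where isnotnull(Queue)
| table Queue, paired_count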
Hi Team, I have observed some strange behavior. We had the 'Splunk Add-on for AWS' installed on the IDM cloud node and the indexers. Recently we asked the Splunk Cloud team to install the add-on on the Search Heads as well. After the installation on the SHs, the AWS alerts/notables that used to be triggered stopped triggering. We simply disabled the add-on on the SH and it started working again. What could the possible issue be? Thanks
I have an event that looks like below:
2022-06-15 19:59:57.489 threadId=L4GFP2275S1K class="ActiveSession" mname="NA" callId="NA" eventType="InMsg" data="<InfoNox_Interface xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><TestRQ><Merchant_ID>testmid</Merchant_ID></TestRQ>"
and I would like to remove the XML element below, including its attributes, from the data field. How can I do that?
<InfoNox_Interface xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
The result I want is:
2022-06-15 19:59:57.489 threadId=L4GFP2275S1K class="ActiveSession" mname="NA" callId="NA" eventType="InMsg" data="<TestRQ><Merchant_ID>testmid</Merchant_ID></TestRQ>"
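A sketch of the two usual options; the sed expression simply removes any <InfoNox_Interface ...> opening tag, and the sourcetype name in the index-time variant is a placeholder. At search time, on the already-extracted data field:
| rex mode=sed field=data "s/<InfoNox_Interface[^>]*>//g"
Or at index time, so the tag never reaches the index (props.conf on the parsing tier):
[your_sourcetype]
SEDCMD-strip_infonox = s/<InfoNox_Interface[^>]*>//g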
Hi all, I added a new monitor for a log file in inputs.conf, and there are no errors in splunkd.log. However, it is not being ingested into Splunk, while the same configuration works on other servers. May I know what configuration settings to check or compare between the problematic server and the working servers? Regards, Zijian
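A sketch of the checks usually worth running on the problematic forwarder (the path is a placeholder; the CLI commands are standard):
# show the effective monitor stanza and which .conf file each setting comes from
splunk btool inputs list monitor:///path/to/your.log --debug
# ask the tailing processor what it thinks about each watched file
splunk list inputstatus
# confirm the monitor is actually active
splunk list monitor
Comparing the btool output between the working and failing servers often shows a stanza being overridden by another app, and inputstatus shows whether the file was seen, skipped, or already read to the end.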
I am trying to pull up the Risk Event Timeline for a Risk Notable in my Incident Review dashboard. Every time I click the link, it gives me an error saying "Risk event has missing or invalid fields". I know that the Risk Event Timeline only works for the risk_object field on Risk Notables. We have noticed a couple of issues that were related to Search-Driven lookups being disabled. Might there be a lookup table referenced here that is in the same boat? Is there somewhere that defines what fields are required on the Risk Notable? Is there any way to troubleshoot what is missing or incorrect?
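As a first troubleshooting step, a sketch of a search against the default risk index to see whether the underlying risk events actually carry the standard Risk framework fields; the index name, time range, and the risk_object value are assumptions to adjust for your environment:
index=risk risk_object="<risk_object_from_the_notable>" earliest=-30d
| stats count BY risk_object, risk_object_type, search_name, risk_score, risk_message
Rows with empty risk_object_type or risk_score are a common reason a timeline refuses to render.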
Is there an option to drop older events from the pipeline? Old events can cause frequent bucket rolling and are most likely not useful.
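One pattern that can do this at parse time is an INGEST_EVAL rule that routes events older than a cutoff to the nullQueue; a sketch, with the 7-day cutoff and the sourcetype name as placeholders (INGEST_EVAL needs a Splunk version that supports it):
props.conf:
[your_sourcetype]
TRANSFORMS-drop_old_events = drop_older_than_7d
transforms.conf:
[drop_older_than_7d]
# 604800 seconds = 7 days; events whose parsed _time is older are discarded
INGEST_EVAL = queue=if(now() - _time > 604800, "nullQueue", "indexQueue")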
How can I rewrite the following to get past the join limitations?
index=aws eventName=TerminateInstances
| rename "requestParameters.instancesSet.items{}.instanceId" AS vm_id
| join vm_id type=left max=0
    [ search index=aws source="us-west-1:ec2_instances" sourcetype="aws:description" ]
| dedup vm_id
| table _time, action, vm_id, tags.Name, userName
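A sketch of the usual stats-based rewrite that avoids join's subsearch limits. The instance-id field name on the aws:description side (instance_id below) and the fields kept in the stats are assumptions; adjust them to whatever those events actually contain:
index=aws (eventName=TerminateInstances OR (source="us-west-1:ec2_instances" sourcetype="aws:description"))
| eval vm_id=coalesce('requestParameters.instancesSet.items{}.instanceId', instance_id)
| rename tags.Name AS tag_name
| stats latest(_time) AS _time latest(action) AS action latest(userName) AS userName latest(tag_name) AS tag_name BY vm_id
| table _time, action, vm_id, tag_name, userName
Because everything is aggregated by vm_id in a single pass, there is no subsearch row or time limit to hit.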