All Topics


Hello All, I am new to Splunk. My Splunk index is already receiving data from a Kafka source:

    index=k_index sourcetype=k_message

The query result looks like:

    {Field1=abc,Field2=sdfs,Field3=wertw,Field4=123,Field6=87089R....}

I have a use case where I have a list of predefined fields and their associated datatypes. I want to compare these predefined fields (field names only, not values) against the search results, count each mismatch, and report it as a percentage of the total. In short, I want a score indicating whether the incoming events of the last 15 minutes are good (100%, 90%, etc.).

Thanks,
Alwyn
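A minimal SPL sketch of one possible approach, checking per event whether every expected field is present and turning that into a percentage. The three field names come from the sample above, so substitute your own list; datatype checks would need additional eval logic:

    index=k_index sourcetype=k_message earliest=-15m
    | eval has_all=if(isnotnull(Field1) AND isnotnull(Field2) AND isnotnull(Field3), 1, 0)
    | stats sum(has_all) AS good count AS total
    | eval score=round(100*good/total, 1)
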
I have integrations using UDP/TCP data inputs that index data correctly, but after a while they stop working. We have different types of data inputs configured in Splunk, and only the UDP/TCP ones stop. When this happens, we perform the following validations:

- Validate iptables and firewall configurations on the server.
- Validate with tcpdump that the data arrives at the server.
- Validate that there is no data queuing by reviewing the indexing queues.

After various tests, data ingestion recovers after specifying the parameter disabled=0 in inputs.conf and restarting Splunk. We haven't reached anything conclusive about what causes this problem. We would like to understand the cause so we know how to act if the situation repeats itself. Do you know what could cause this? Could you guide me or share ideas on what I could investigate?
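A hedged starting point for triage, assuming the instance's _internal logs are available: chart splunkd warnings and errors by component around the time the inputs stop, and see which subsystem spikes.

    index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
    | timechart span=1h count BY component

Lining the spikes up against the moment the TCP/UDP inputs died may point at the responsible subsystem.
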
Hi Team, how can I check indexer status details for the last month from the search head using an SPL query?
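One hedged sketch, assuming the search head can read the indexers' _internal logs (typical in a distributed deployment): count the distinct hosts reporting splunkd internals per day over 30 days.

    index=_internal sourcetype=splunkd earliest=-30d@d
    | timechart span=1d dc(host) AS reporting_hosts

Dips in reporting_hosts suggest an indexer was down or unreachable; if forwarders also ship _internal, filter the host list to your indexers. The Monitoring Console covers this in more depth if it is set up.
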
This is my query:

    earliest=-15m latest=now index=** host="*" LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error
    | rex field=MESSAGE "(?<message>.{35})"
    | search NOT [ search earliest=-3d@d latest=-d@d index=wiweb host="*" LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error
        | rex field=MESSAGE "(?<message>.{35})"
        | dedup message
        | fields message ]
    | stats count by message appname
    | search count>50
    | sort appname , -count

Almost all of the recurring 'message' values are getting ignored, but a few of them still appear in the results even though they occurred in the last 2 days (and so should have been excluded, which is what the subsearch is doing). Is there anything else I can do to make this query work 100% of the time?
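One thing worth ruling out: search subsearches are truncated at 10,000 results and 60 seconds of runtime by default, so if the exclusion list is larger than that, some messages silently slip through. A quick check, reusing the subsearch from above:

    earliest=-3d@d latest=-d@d index=wiweb host="*" LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error
    | rex field=MESSAGE "(?<message>.{35})"
    | dedup message
    | stats count

If the count is anywhere near 10,000, that limit (limits.conf, [subsearch] maxout/maxtime) is the likely culprit.
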
We have upgraded the AppDynamics Java agent to version 22.3.0.33637 on an SAP Java system. After updating the javaagent, when we try to start the SAP Java system we see a couple of issues:

1. The SAP system does not start.
2. When the system does somehow start, our SAP NWA URL is not reachable at all.

However, after removing the parameter "-javaagent:/usr/sap/<SID>/appdyanmics/javaagent.jar" from SAP Configtool, followed by a full system restart, we are able to access the NWA URL again. But after doing so, no data gets populated in the AppD dashboard.
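For reference, the agent normally needs a handful of system properties alongside the -javaagent flag; a hedged sketch of a typical set (the property names below are the standard AppDynamics ones, but all the values are placeholders):

    -javaagent:/usr/sap/<SID>/appdyanmics/javaagent.jar
    -Dappdynamics.agent.applicationName=<app>
    -Dappdynamics.agent.tierName=<tier>
    -Dappdynamics.agent.nodeName=<node>
    -Dappdynamics.controller.hostName=<controller-host>
    -Dappdynamics.controller.port=<port>

If the JVM fails to start only with the agent attached, the agent's logs directory next to javaagent.jar and the SAP server trace files are the usual first places to look.
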
Below is my Splunk raw event data:

    {
      "additional": {
        "method": "POST",
        "url": "/api/resource/getContentEditorData",
        "headers": {
          "cloudfront-viewer-country": "US",
          "origin": "https://www.site1.com",
          "sec-ch-ua-platform": "\"Android\"",
        }
      },
      "level": "notice",
      "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData"
    }

I need the count of cloudfront-viewer-country and sec-ch-ua-platform for each origin. Please help.

Expected result:

    Origin                  Platform  Platform Count  Country  Country Count
    https://www.site1.com   Android   10              US       22
                            macOS     12              UK       3
                            Windows   6               AU       1
    https://www.site2.com   Android   4               US       8
                            macOS     4               UK       1
                            Windows   2               AU       1
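A hedged SPL sketch (index and sourcetype are placeholders; the dotted field names assume Splunk's automatic JSON extraction, so adjust them to whatever your events actually extract as):

    index=<your_index> sourcetype=<your_sourcetype>
    | rename "additional.headers.origin" AS Origin, "additional.headers.cloudfront-viewer-country" AS Country, "additional.headers.sec-ch-ua-platform" AS Platform
    | eval Platform=trim(Platform, "\"")
    | stats count BY Origin, Platform, Country

Note this produces one count per Origin/Platform/Country combination; getting independent Platform and Country counts side by side, as in the expected table, would take two aggregation passes (e.g. via appendcols) or two separate panels.
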
I want to use a lookup file to drive the search for an alert. This seems a bit unique: I don't want to use event data from results to drive the lookup, but rather have all the lookup entries dynamically added to the search itself. Below is the example use case.

CSV file example:

    Index, ErrorKey
    "index1","Error string 1"
    "index1","Error string 2"
    "index2","Error string 3"

I'm looking to use it to scale a search like this:

    index=index1 OR index=index2 ("Error string 1" OR "Error string 2" OR "Error string 3")

Basically, the index/error string combos could be managed in the CSV file as opposed to the alert search itself, making it easier to add, scale, and maintain the search criteria. Is this possible?
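This is commonly done with an inputlookup subsearch plus format; a hedged sketch, assuming the CSV is uploaded as a lookup named error_keys.csv (renaming a column to the special field name "search" makes format emit its values as raw search terms, paired per row with the index):

    [ | inputlookup error_keys.csv
      | rename Index AS index, ErrorKey AS search
      | fields index search
      | format ]

The subsearch expands to something like ((index="index1" AND "Error string 1") OR ...), which pairs each error string with its own index per row, rather than the index/string cross-product in the hand-written example; the Job Inspector shows the expanded search if you want to verify.
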
Hi, I'm able to get the response in a tabular format using the command:

    table clientName, apiMethod, sourceSystem, httpStatus, version, timeTaken

What I want is to do some aggregation on these fields: basically, group by clientName, apiMethod, sourceSystem, httpStatus, and version to get the total calls and the average time. The command below is clearly giving misleading results:

    stats count(clientName) as TotalCalls, avg(timeTaken) as avgTimeTakenS by clientName, apiMethod, sourceSystem, httpStatus, version

Please help.

Thanks,
Arun
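A small hedged adjustment that usually matches what's described here: count(clientName) only counts events where clientName is non-null, which can undercount, whereas a bare count counts every event in each group.

    ... | stats count AS TotalCalls, avg(timeTaken) AS avgTimeTakenS BY clientName, apiMethod, sourceSystem, httpStatus, version

If avgTimeTakenS comes out empty, check that timeTaken is numeric (an | eval timeTaken=tonumber(timeTaken) beforehand is a safe guard).
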
Hi at all, I tried to use the Alert Manager App on Splunk Cloud (it should be certified for Splunk Cloud) and it works for most features, but I'm receiving the following error: "A custom JavaScript error caused an issue loading your dashboard. See the Developer Console for more details" in many dashboards, such as:

- Reports – Stats Transactions
- Reports – Incident Export
- Settings – Incident Settings
- Settings – Alert Status
- Settings – Drilldown Actions
- Settings – External Workflow Actions
- Settings – User Settings
- Settings – eMail Templates

In addition, what is the Developer Console? I also don't have any alert result details in the Incident Posture. Are there known issues on Splunk Cloud? On Splunk Cloud I cannot access the CLI, so how can I debug JS errors?

Thank you for your attention.
Ciao.
Giuseppe
Hi All, we have a universal forwarder running on a Windows server sending data to our Splunk Cloud instance. Below are some details of the .conf files and logs.

inputs.conf:

    [default]
    host = DB_DATA

    [monitor://D:\ABC\DB_Monitoring\Cust]
    disabled = 0
    index = rjsql
    sourcetype = csv
    crcSalt = <SOURCE>
    time_before_close = 60

props.conf:

    [default]
    NO_BINARY_CHECK = true
    CHARSET = AUTO

    [source::D:\ABC\DB_Monitoring\Cust\*.csv]
    CHECK_METHOD = modtime

There are some files which are either 1) not being indexed at all, or 2) have only their headers indexed. This doesn't happen with all the files, only some of them.

(Screenshots of the _internal logs for a file with only its header indexed, the tailing processor file status, and the btool output were attached to the original post.)

Can you please suggest what else I could check here to resolve this intermittent issue? Thank you.
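To see what the forwarder itself decides about those files, a hedged sketch against the forwarder's _internal logs (TailReader and WatchedFile are the components that log file-tailing decisions; the path filter is simply your monitored directory):

    index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile) "D:\\ABC\\DB_Monitoring\\Cust"

Messages about files being skipped as already seen, too small, or still open often explain header-only ingestion, particularly in combination with crcSalt and CHECK_METHOD = modtime.
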
Hello. I am trying to use a dropdown selector (as opposed to the time selector) in my dashboard to create tokens with @d values formatted as %e-%b-%Y %T.%L. In other words, I want to give users an option for "Today, Yesterday, This Week" and have that populate tokens with the earliest (@d) and latest (@d+86399) values of that day. For example:

Input: user selects TODAY
Token.start: 16-jun-2022 00:00:00.000
Token.end: 16-jun-2022 23:59:59.999

This should be straightforward, but for some reason I am racking my brain trying to get this.
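A hedged Simple XML sketch of one way to do it. The token names are made up; the eval functions shown are standard token-eval functions, and the week choice would need its own latest calculation, so only the day choices are wired up here:

    <input type="dropdown" token="day_sel">
      <label>Day</label>
      <choice value="@d">Today</choice>
      <choice value="-1d@d">Yesterday</choice>
      <change>
        <eval token="tok_start">strftime(relative_time(now(), "$value$"), "%e-%b-%Y %T.%L")</eval>
        <eval token="tok_end">strftime(relative_time(now(), "$value$") + 86399, "%e-%b-%Y %T.%L")</eval>
      </change>
    </input>

relative_time(now(), "@d") snaps to midnight today, and adding 86399 seconds lands on 23:59:59 of the same day (the .999 milliseconds would have to be appended literally if required, since strftime of that value yields .000).
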
How do you tell how much you can ingest per day? There must be a way to check what you're licensed for, but I can't find this anywhere on the forums.

Also, why isn't "Licensing" or "License" a label category?
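Two hedged pointers: the licensed daily quota is shown in Splunk Web under Settings » Licensing (on the instance holding the license manager role), and actual daily usage can be pulled from the license usage log, for example:

    index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
    | eval GB=round(b/1024/1024/1024, 2)
    | timechart span=1d sum(GB) AS daily_ingest_GB

The b field is bytes; RolloverSummary events are written once per day at license rollover.
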
Hi, I'm trying to remove blanks in a field when adding a CSV file. On the heavy forwarder I have tried to use a regex in props.conf and transforms.conf, but the data continues to be indexed with the blank spaces in the fields.

props.conf:

    [blacklist]
    CHARSET = UTF-8
    DATETIME_CONFIG = CURRENT
    INDEXED_EXTRACTIONS = csv
    KV_MODE = auto
    KV_TRIM_SPACES = true
    SEDCMD-blacklist = s/(^|\s)($|\s)//g
    TRANSFORMS-blacklist = blacklist_name
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    category = Custom
    description = sourcetype para incorporar los datos de listas negras al índice
    disabled = false
    pulldown_type = true
    FIELD_DELIMITER = ;

transforms.conf:

    [blacklist_name]
    SOURCE_KEY = field:mltf_blacklist_name
    REGEX = ^\s*(\w+)(.*)\s
    FORMAT = $1
    WRITE_META = true

I've been going through the documentation and I'm a bit lost with the Splunk configurations. Any help will be appreciated. Thanks.
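If index-time stripping proves fiddly, a hedged search-time fallback that trims whitespace on every extracted field (foreach and trim are standard SPL; this doesn't change what's on disk, only what searches see):

    ... | foreach * [ eval <<FIELD>>=trim('<<FIELD>>') ]

Also worth knowing: INDEXED_EXTRACTIONS is applied where the file is parsed, so for these settings to take effect the props.conf stanza generally has to live on the instance doing the CSV parsing, not downstream.
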
source="http:Emerson_P1CDN" | spath host | spath client_ip | spath status_code | where status_code=200 | spath referer | where referer="" | spath path | search path NOT ("*wcsextendedsearch" OR ... See more...
source="http:Emerson_P1CDN" | spath host | spath client_ip | spath status_code | where status_code=200 | spath referer | where referer="" | spath path | search path NOT ("*wcsextendedsearch" OR "*EmersonSKUListingView" OR "*EmersonProductListingView" OR "*CartRefreshStatusJSON" OR "*PriceAjaxView" OR "*AjaxSerialNumber" OR "*UnsupportedBrowserErrorView" OR "*LogonForm"OR "*MiniCart" OR "*MiniShopCartDisplayView" OR "*AnalyticsPageView" OR "*AjaxAccountLinkDisplay" OR "*.css" OR "*.js" OR "*.woff2" OR "*.woff" OR "*.gif" OR "*.png" OR "*.jpg" OR "*.ico" OR "*.pdf" OR "*.html" OR "*.txt" OR "*.xml" OR "*/ClickInfo" OR "*thumb") | bin _time span=1m | stats count by _time,host,path,client_ip | where count >= 100 | sort - count Does the query at the top is correct?, because we want to count the total events of _time,host,path and client_ip per minute
The current stanza that is working is:

    [fschange:F:\bau\box\quest]

I need to narrow it to:

    [fschange:F:\bau\box\quest\...\arch]

where quest has 5 folders, each of which contains a folder \arch. But it doesn't seem to work using \...\ or \*\. It is on a forwarder, which I have already restarted as well.
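One hedged idea, since fschange recurses from its base path by default: keep the working base stanza and restrict what gets reported with fschange filters instead of a wildcard path. The stanza and attribute names below are from memory of inputs.conf.spec (fschange is deprecated in recent versions), so verify them against the spec file for your Splunk version before relying on this:

    [fschange:F:\bau\box\quest]
    recurse = true
    filters = archonly

    [filter:whitelist:archonly]
    regex1 = \\arch\\
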
Hello, I'm running this query:

    | union
        [ search host="puppet-01" OR host="jenkins-01" OR host="ANSIBLE-01" sourcetype=ProductionDeploy NOT Permisson_Job_Name=*_permission Environment=PRODUCTION
        | table _time, App_Name, User, Change_Log_Description, Environment, Version ]
        [ search sourcetype=mscs:storage:blob:json
        | rex field=_raw "Details\":\"(?<Details>.*?)\","
        | rex field=_raw "ProjectName\":\"(?<ProjectName>.*?)\","
        | rex field=_raw "ScopeDisplayName\":\"(?<ScopeDisplayName>.*?)\","
        | rex field=_raw "releaseName\":\"(?<releaseName>.*?)\"}"
        | rex field=_raw "ActionId\":\"(?<ActionId>Release.ReleaseCreated)\","
        | rex field=_raw "ActorUPN\":\"(?<ActorUPN>.*?)\","
        | rex field=_raw "DeploymentResult\":\"(?<DeploymentResult>.*?)\","
        | rex field=_raw "PipelineName\":\"(?<PipelineName>.*?)\","
        | where releaseName != null AND PipelineName like "%Production"
        | rename ProjectName AS App_Name
        | rename ActorUPN AS User
        | rename releaseName AS Change_Log_Description
        | rename PipelineName AS Environment
        | rename DeploymentResult AS status
        | table _time, App_Name, User, Change_Log_Description, Environment, Version, status ]
    | sort -_time asc

I'm trying to get the status. The first search doesn't have this value but the second one does, yet I don't see a status column in my results. Can someone explain why? Thanks.
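A hedged guess at the likely cause: a field list drawn from the first dataset (a trailing | table, or a dashboard panel's column list) can hide columns that only exist in the second dataset. Making the column explicit in both branches usually surfaces it, e.g. adding a placeholder in the first branch:

    | eval status="" | table _time, App_Name, User, Change_Log_Description, Environment, Version, status

Also note that | where releaseName != null compares against a field literally named null; isnotnull(releaseName) is the standard null test in SPL.
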
Hi Team, I am looking for how to pull license usage reports for the User Seat, Per Uptime, and Per API Runs license metrics, either from a CLI command or from a dashboard. The products for which I need to check license usage reports are:

- Splunk Threat Research
- Splunk Mission Control
- Splunk SOAR (Cloud)
- Splunk On-Call
- Splunk User Behavior Analytics
- Splunk Real User Monitoring
- Splunk Synthetic Monitoring

I am unable to find any documentation or commands to fetch license usage reports for these products. Can anyone please help me with this?

Thanks,
Avinash Kumar
Hello, I have a daily indexing license limit, so I wanted to limit the data indexed from a specific piece of equipment. I receive a great amount of logs from a source device via syslog (I can't change which types of logs are sent to Splunk). So, to limit the amount of data being indexed, I filtered the data at the indexing phase in Splunk: I added a regex so that Splunk only indexes the wanted types of logs and ignores the other syslog logs received from that device. I did this using TRANSFORMS-set in props.conf and a regex expression in transforms.conf.

As a result, I get the following errors in Splunk health that I can't fix:

- Ingestion Latency: Events from tracker.log have not been seen for the last 2940 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
- TailReader-0: The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.

Whenever I remove the regex expression the problem goes away, meaning the regex is the only source of this problem.

Thank you in advance for your help.
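For comparison, the documented nullQueue pattern looks like this (the sourcetype name and the keep-pattern are placeholders). If your config matches this shape, the next suspect is the regex itself: an expression with heavy backtracking can stall the typing queue and produce exactly these queue-full symptoms.

    # props.conf
    [your_syslog_sourcetype]
    TRANSFORMS-filter = drop_all, keep_wanted

    # transforms.conf
    [drop_all]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [keep_wanted]
    REGEX = <pattern matching the events to keep>
    DEST_KEY = queue
    FORMAT = indexQueue

Testing the regex against sample events (e.g. at regex101 with the PCRE flavor) for catastrophic backtracking is a cheap first check.
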
Can we download Splunk User Behavior Analytics (the ".OVA" file) with a trial version, or for free?
Hi All, I have logs like the below in Splunk:

    log1: "count":1,
    log2: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM3
    log3: "count":1,
    log4: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM2
    log5: "count":1,
    log6: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM1

I used the below query to create a table showing the Queue and the Consumer count:

    ***** | rex field=_raw "Rs\.(?P<Queue>\w+)"
    | rex field=_raw "count\"\:(?P<Consumer_Count>\d+)\,"
    | table Queue,Consumer_Count

But this query gives the table in the below manner:

    Queue    Consumer_Count
             1
    APIM3
             1
    APIM2
             1
    APIM1

I want the rows to be combined in the below manner:

    Queue    Consumer_Count
    APIM3    1
    APIM2    1
    APIM1    1

Please help to modify the query to get the desired output. Thank you..!!
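A hedged sketch of one way to pair them up, assuming each count event arrives immediately before its queue event in time order (as in the sample): sort ascending, carry the last seen count forward with filldown, then keep only the queue rows.

    ***** | rex field=_raw "Rs\.(?P<Queue>\w+)"
    | rex field=_raw "count\"\:(?P<Consumer_Count>\d+)\,"
    | sort 0 _time
    | filldown Consumer_Count
    | where isnotnull(Queue)
    | table Queue, Consumer_Count

filldown copies the most recent non-null Consumer_Count onto the following events, so each queue row picks up the count that preceded it.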