All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I would like to ask why my Eventgen only generates data once. It follows all the settings, but it only ingests once. Please see eventgen.conf:

[juniper_sample]
disabled = false
latest = now
interval = 60
count = -1
randomizeCount = 0.2
outputMode = splunkstream
index = firewall_juniper
sourcetype = juniper:junos:firewall
Hello everyone, I have created a DB Connect input. It reads records from the database into the Splunk index every five minutes, but I found that there is about a 30-minute difference between index time and event time. For example:

index = test
| eval indextime=strftime(_indextime,"%Y/%m/%d %H:%M:%S")
| eval age=(_indextime - _time)/60
| table indextime _time age

indextime            _time                age
2020/02/27 11:40:00  2020/02/27 11:11:14  28.76667
2020/02/27 10:30:00  2020/02/27 09:59:36  30.40000
2020/02/27 10:25:00  2020/02/27 09:56:48  28.20000

Now I want to create an alert that queries for important events and runs every 10 minutes. How do I set the alert's time range correctly so that I neither miss important events nor alert on the same event twice?

time range: ???? (how do I set this up correctly?)
cron expression: */10 * * * *
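A quick way to reason about the window: with a steady ~30-minute ingest lag, an alert that selects by event time over "the last 10 minutes" never sees the delayed events at run time, while selecting by index time (in Splunk, `_index_earliest`/`_index_latest`) over contiguous 10-minute windows catches each event exactly once. A small simulation, with all timestamps hypothetical:

```python
LAG = 30 * 60      # observed gap between event time and index time (~30 min)
WINDOW = 10 * 60   # the alert runs every 10 minutes

# One event per minute for an hour: (event_time, index_time) pairs.
events = [(t, t + LAG) for t in range(0, 3600, 60)]

def alert_run(now, key):
    """One scheduled run at `now`: only already-indexed events are visible."""
    visible = [e for e in events if e[1] <= now]
    return [e for e in visible if now - WINDOW <= key(e) < now]

runs = range(WINDOW, 7800, WINDOW)  # back-to-back scheduled runs
by_event_time = {e for now in runs for e in alert_run(now, lambda e: e[0])}
by_index_time = {e for now in runs for e in alert_run(now, lambda e: e[1])}

print(len(by_event_time))  # 0  -- events are still 30 min from being indexed
print(len(by_index_time))  # 60 -- each event lands in exactly one window
```

The takeaway: key the alert's window to index time (and keep the windows contiguous), and the lag stops mattering as long as it stays under the retention of the search.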
Hi All, I need help with table pagination. By default, Splunk provides << prev and next >> pagination options. Instead of that, can we have START and END, where clicking START shows the first page of a table with more than one page and clicking END shows the last page? Thanks, Vishal
Hi All, I have an AD account. From Splunk, how can I find out what modifications have been made to this account in the last month, and who made them? I want to export this information to a CSV file.
Miraculously, in 2020 there still isn't a Splunk Answer that details an elegant way to convert from float to currency. Is this the best possible solution, or are there more elegant ways to do it? Be careful with negative numbers.

eval budget_amount="$".tostring(budget_amount,"commas")
| rex mode=sed field=budget_amount "s/\$$-/-$$/g"
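The same logic — thousands commas, then moving the minus sign outside the dollar sign — in plain Python, just to pin down the expected output (the function name is illustrative):

```python
def to_currency(amount):
    """Format a float as USD: thousands commas, minus sign outside the '$'."""
    sign = "-" if amount < 0 else ""
    return f"{sign}${abs(amount):,.2f}"

print(to_currency(1234567.5))   # $1,234,567.50
print(to_currency(-42.5))       # -$42.50
```

Formatting the absolute value and prepending the sign avoids the "$-42.50" artifact that the sed step in the SPL version exists to clean up.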
Hi, I'm working on instrumenting a Java Lambda. My environment contains two Lambdas; the first one triggers the second through SNS, and I have implemented the SNS exit call from the first Lambda. Now I'm trying to correlate the business transaction so that it starts in the first Lambda and continues into the second. I'm using manual instrumentation and followed the AppDynamics documentation: https://docs.appdynamics.com/display/PRO45/Manual+Tracer+Instrumentation

This is a snippet from my code showing how I fetch the correlation header from the input:

public Context handleRequest(InputStream input, Context context) {
    if (runAppDynamics) {
        // Instantiate the tracer
        Tracer tracer = AppDynamics.getTracer(context);

        // Automatically parse the correlation header
        InputStream inputStream = InputStreamConverter.convertToMarkSupportedInputStream(input);
        try {
            String inputAsString = IOUtils.toString(inputStream, Charset.forName("UTF-8"));
            com.appdynamics.serverless.tracers.dependencies.com.google.gson.JsonParser parser = new JsonParser();
            JsonObject inputObject = parser.parse(inputAsString).getAsJsonObject();
            if (inputObject.has(Tracer.APPDYNAMICS_TRANSACTION_CORRELATION_HEADER_KEY)) {
                System.out.println("corrHeader in tracer");
                correlationHeader = inputObject.get(Tracer.APPDYNAMICS_TRANSACTION_CORRELATION_HEADER_KEY).getAsString();
            } else if (inputObject.has("headers")) {
                // Try reading from HTTP headers
                System.out.println("corrHeader in headers");
                JsonObject httpHeaders = inputObject.getAsJsonObject("headers");
                if (httpHeaders.has(Tracer.APPDYNAMICS_TRANSACTION_CORRELATION_HEADER_KEY)) {
                    correlationHeader = httpHeaders.get(Tracer.APPDYNAMICS_TRANSACTION_CORRELATION_HEADER_KEY).getAsString();
                }
            }
        } catch (IOException e) {
            // swallowed in the original snippet; consider logging
        }
        if (correlationHeader == null) {
            correlationHeader = "corrheadertest";
        }
        // Create the transaction and start monitoring
        transaction = tracer.createTransaction(correlationHeader);
        transaction.start();

This is the Lambda test input:

{
  "key": "Content-Type",
  "singularityheader": "com.appdynamics.serverless.tracers.aws.correlation.CorrelationHeader"
}

Every time I test my Lambda it doesn't correlate, and the CloudWatch logs return:

[AppDynamics Tracer] [DEBUG]: Creating transaction for header [com.appdynamics.serverless.tracers.aws.correlation.CorrelationHeader]
[AppDynamics Tracer] [DEBUG]: Processing continuing transaction => com.appdynamics.serverless.tracers.aws.correlation.CorrelationHeader
[AppDynamics Tracer] [DEBUG]: Caller chains read => []
[AppDynamics Tracer] [DEBUG]: Not correlating transaction due to potentially malformed header [com.appdynamics.serverless.tracers.aws.correlation.CorrelationHeader]

I tried changing the value many times to represent a valid correlation header, but it gives me the same message for every value I put in. Any idea what I'm missing here, and what a valid correlation header should look like? Thanks
I am trying to feed the results of two subsearches into an eval search:

| eval Average=data/assets
    [stats sum(data) | return $data]
    [stats count(MAC_Address) | return $assets]

There may be a better way to do this... I need the sum of data divided by the total number of unique MAC addresses. Any help is appreciated.
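For reference, the intended arithmetic — total data divided by the number of *distinct* MAC addresses (which in SPL would be dc(MAC_Address) rather than count) — sketched on a made-up sample:

```python
records = [
    {"mac": "aa:bb:cc", "data": 100},
    {"mac": "aa:bb:cc", "data": 50},
    {"mac": "dd:ee:ff", "data": 150},
]

total_data = sum(r["data"] for r in records)     # like stats sum(data)
unique_macs = len({r["mac"] for r in records})   # like stats dc(MAC_Address)
average = total_data / unique_macs

print(average)  # 150.0
```

Note the set comprehension: counting rows (count) would give 3 here, but the question asks for unique MAC addresses, which is 2.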
Hey Folks, I was about to start Splunking for this particular AWS credential-compromise scenario: netflixtechblog.com/netflix-cloud-security-detecting-credential-compromise-in-aws-9493d6fd373a. Just wondering before I start: has anyone else out there done it, and does anyone have anything they can share? Thanks!
Hi, we are using a Splunk Cloud instance. My launcher app sometimes disappears from the app context on the heavy forwarder: in Data Inputs, while configuring the input settings, 'launcher' does not appear in the app dropdown. I want to understand where Splunk fetches these apps from; I am assuming it is from the deployment server. My launcher app contains the configuration for the type of data I am trying to ingest, and to keep the same type of configuration consolidated I am reluctant to use any other app. However, I can't see the launcher app in the dropdowns and I'm not sure why. I'd like a holistic view of how this works: where does the list of apps come from, and why does my app disappear?
I need to search across multiple indexes/events and join on different fields from both. The query below works but is really slow when there are a large number of results. I'm looking for an alternative solution that improves the performance. Thanks!

index=x 2202
| spath "EventStreamData.requestContext.id"
| spath "EventStreamData.httpStatus"
| rename EventStreamData.httpStatus as "STATUS"
| rename EventStreamData.requestContext.id as "transaction_number"
| fields transaction_number, STATUS
| join transaction_number
    [| search index=y 2203
     | spath "EventStreamData.requestContext.allRequestHeaders.client{}"
     | search "EventStreamData.requestContext.allRequestHeaders.client{}"=77777
     | spath "EventStreamData.response.transactionNumber"
     | rename EventStreamData.response.transactionNumber as "transaction_number"
     | fields transaction_number]
| stats count AS "COUNT" by STATUS
| eval STATUS_MESSAGE=case(((STATUS==200) OR (STATUS==201)), "Success", ((STATUS==500) OR (STATUS==502) OR (STATUS==504) OR (STATUS==404)), "Server Error", ((STATUS==400) OR (STATUS==401) OR (STATUS==403) OR (STATUS==409)), "Client Error")
| table STATUS_MESSAGE, STATUS, COUNT
| addcoltotals COUNT
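Conceptually, the faster pattern avoids join by building the set of matching transaction numbers once and testing membership in a single pass (in SPL this is typically done with stats over both indexes, or a subsearch feeding a filter, rather than join). A Python sketch of that hash-lookup idea, with made-up data:

```python
from collections import Counter

# Transaction numbers that matched the client filter in index=y
y_txns = {"t1", "t3"}

# (transaction_number, status) events from index=x
x_events = [("t1", 200), ("t2", 500), ("t3", 404), ("t1", 201)]

# One pass over x with an O(1) membership test per event,
# instead of a per-row correlated lookup as in a nested-loop join.
status_counts = Counter(s for txn, s in x_events if txn in y_txns)

print(dict(status_counts))  # {200: 1, 404: 1, 201: 1}
```

The 500 event is dropped because its transaction never appeared in index=y — exactly what the join achieves, at a fraction of the cost when the event sets are large.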
Hi all, I am receiving this error when trying to configure the ServiceNow developer instance that I created on my Splunk server. My ServiceNow instance is a dev instance and I'm not sure what I'm missing; the username and password for my dev SNOW instance are correct, so I'm just not sure what the network issue is. Please assist. Thanks in advance.
I am trying to mask a password inside a log coming from the HTTP Event Collector. I configured my props.conf with the following:

[api-core]
TRANSFORMS-anonymize = password-anonymizer

and my transforms.conf like this:

[password-anonymizer]
REGEX =
FORMAT = xxxxx
DEST_KEY = _raw

I want to mask the password inside this log, but I can't find the right regular expression for it:

{"api_id":"5e4d6034e4b0258f388e1dfe","app_type":"PRODUCTION","bytes_received":57,"response_body":"","client_id":"4b7eff29-39ca-4728-ba28-b8889308600d","billing":{"amount":0,"provider":"none","currency":"USD","model":"free","trial_period_days":0},"datetime":"2020-02-19T16:29:23.535Z","time_to_serve_request":23,"uri_path":"/public/human-resource/v1.0/users/password-reset","log_policy":"payload","endpoint_url":"N/A","product_id":"__INTERNAL_QS__","host":"127.0.0.1","client_ip":"10.181.37.19","app_id":"__INTERNAL_QS__:1.0.0:default","client_geoip":{},"request_protocol":"https","developer_org_id":"5ddfc086e4b0740304d6c3e0","transaction_id":"66306","immediate_client_ip":"10.181.37.19","product_name":"__INTERNAL_QS__","plan_name":"default","product_title":"","tags":["_geoip_lookup_failure"],"catalog_id":"5ddfc67ce4b0740304d6c427","space_name":[""],"api_name":"Authentication","org_id":"5ddfc086e4b0740304d6c3e0","plan_version":"1.0.0","status_code":"400 Bad Request","request_method":"PUT","developer_org_name":"public","http_user_agent":"Dalvik/2.1.0 (Linux; U; Android 7.0; Moto G (4) Build/NPJS25.93-14-8.1-9)","resource_path":"put","@version":"1","response_http_headers":[{"Server":"Microsoft-IIS/10.0"},{"transaction_id":"36370bf9-9239-4e8b-bc41-aae8ffc431c5"},{"timestamp":"2020-02-19T16:29:23Z"},{"channel-id":""},{"application":""},{"Itau-Client-Secret":""},{"Itau-Client-Id":""},{"X-Powered-By":"ASP.NET"},{"Date":"Wed, 19 Feb 2020 16:29:23 GMT"},{"X-Global-Transaction-ID":"ce91764b5e4d626300010302"},{"Access-Control-Expose-Headers":"APIm-Debug-Trans-Id, X-RateLimit-Limit,
X-RateLimit-Remaining, X-RateLimit-Reset, X-Global-Transaction-ID"},{"Access-Control-Allow-Origin":"*"},{"Access-Control-Allow-Methods":"PUT"}],"org_name":"public","latency_info":[{"task":"Start","started":0},{"task":"security-appID","started":7},{"task":"invoke","started":9}],"headers":{"http__ws_haprt_wlmversion":"-1","http_via":"1.1 AwAAAKsfL+8-","http_version":"HTTP/1.1","http_connection":"Keep-Alive","request_method":"POST","http_host":"localhost:9700","request_uri":"/_bulk","http_x_forwarded_server":"apimngdes.itauchile.cl","content_type":"text/plain","http_x_global_transaction_id":"ce91764b5e56ca470154b0f1","http_x_forwarded_host":"10.181.168.56:9443","http_x_forwarded_for":"10.181.168.63","request_path":"/_bulk","http_organization":"admin","http_x_client_ip":"127.0.0.1","content_length":"211346"},"catalog_name":"human-resource","product_version":"1.0.0","debug":[],"rateLimit":{"rate-limit":{"limit":"-1","count":"-1"},"rate-limit-1":{"limit":"-1","count":"-1"},"rate-limit-2":{"limit":"-1","count":"-1"},"per-minute":{"limit":"-1","count":"-1"}},"api_version":"v1","bytes_sent":0,"app_name":"__INTERNAL_QS__","gateway_geoip":{},"@timestamp":"2020-02-26T19:43:03.957Z","request_body":"{ \"password_new\":\"qwe123\", \"password_new_confirm\":\"qwe123\" }","request_http_headers":[{"Content-Type":"application/json"},{"Accept":"application/json"},{"charset":"utf-8"},{"authorization":"********sanitized********"},{"Itau-Client-Secret":"kO1yD5bJ2bX8dS8eR3pQ7mQ6cM0uO0aV6mX7dG5oP6xD4kD5uD"},{"Itau-Client-Id":"4b7eff29-39ca-4728-ba28-b8889308600d"},{"User-Agent":"Dalvik/2.1.0 (Linux; U; Android 7.0; Moto G (4) Build/NPJS25.93-14-8.1-9)"},{"Host":"clstgappd01v5.itauchile.cl"},{"Accept-Encoding":"gzip"},{"Content-Length":"57"},{"Via":"1.1 
AQAAAKCPm9Q-"},{"X-Client-IP":"10.181.37.19"},{"X-Global-Transaction-ID":"ce91764b5e4d626300010302"}],"resource_id":"Authentication:v1:put:/v1.0/users/password-reset","gateway_ip":"10.181.168.63","space_id":[""],"plan_id":"__INTERNAL_QS__:1.0.0:default","developer_org_title":"undefined","query_string":[]}

The extracted fields are:

host = 10.181.167.158:8088
host = 127.0.0.1
request_body = { "password_new":"qwe123", "password_new_confirm":"qwe123" }
source = http:api_connect_token
sourcetype = api-core
uri_path = /public/human-resource/v1.0/users/password-reset

I want to mask "password_new":"qwe123" so that it becomes "password_new":"xxxxxx". Please help with this.
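One way to build the REGEX is to match the escaped-quote form the value takes inside _raw. A Python check of such a pattern (the transforms.conf equivalent would capture the prefix and rebuild _raw via FORMAT; field names other than password_new/password_new_confirm are not covered by this sketch):

```python
import re

# The request_body value as it appears inside _raw: quotes are escaped (\")
raw = r'"request_body":"{ \"password_new\":\"qwe123\", \"password_new_confirm\":\"qwe123\" }"'

# Match the backslash-quote sequences literally, capture the field prefix,
# and consume the value up to the next backslash or quote.
pattern = r'(\\"password_new(?:_confirm)?\\":\\")[^\\"]*'
masked = re.sub(pattern, r'\1xxxxxx', raw)

print(masked)
# "request_body":"{ \"password_new\":\"xxxxxx\", \"password_new_confirm\":\"xxxxxx\" }"
```

The key point is that inside the indexed event the quotes around the value are `\"`, not bare `"`, so a pattern written against the pretty-printed JSON will never match.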
Hi Splunkers, can I use one single Splunk instance as both the Deployer and the Deployment Server? If yes, what are the pros and cons?
Hello Splunkers, I want to know if we can limit the RAM, CPU, and disk utilization of a server where I have installed the Universal Forwarder. Based on my research so far, I understand that the bandwidth limit can be controlled using limits.conf. Background: as I am installing the UF on my production servers (*nix, Windows), I don't want the UF to choke resource utilization on the server; instead, I would like to limit RAM and CPU utilization to 2-5% and disk to 10-15%. Thanks in advance.
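On Linux hosts where the forwarder runs under systemd, the OS itself can cap CPU, memory, and disk I/O via a unit override; Splunk's own limits.conf ([thruput] maxKBps) only throttles forwarding bandwidth, not host resources. A sketch, assuming the service is named SplunkForwarder.service (verify the actual unit name and install path on your host):

```ini
# /etc/systemd/system/SplunkForwarder.service.d/resource-limits.conf
# Apply with: systemctl daemon-reload && systemctl restart SplunkForwarder
[Service]
CPUQuota=5%
MemoryMax=512M
# Any file path selects its backing block device for the I/O cap
IOReadBandwidthMax=/opt/splunkforwarder 10M
IOWriteBandwidthMax=/opt/splunkforwarder 10M
```

MemoryMax and the IO*BandwidthMax directives require cgroup v2 (older cgroup v1 systems use MemoryLimit= and BlockIO*); there is no direct equivalent of this mechanism on Windows.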
I am running Splunk Enterprise 8.0.1, monitoring files with a universal forwarder and putting data from CSV files into a metric index using logs-to-metrics through props.conf and transforms.conf. Most of the monitored files work as expected, but one is not showing up in the metric index, and I cannot find any errors about it in splunkd.log or metrics.log. This file occasionally has the last entry empty. Here is an example of the data:

one,1,100
two,2,200
three,3,
four,4,400

The value in column three is set as a dimension for the metric data. It is actually a key variable and I need to be able to track it. Does anyone know if the missing value is causing the data not to import into my metric index, and if so, how to fix it? Thanks.
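The trailing comma parses as an empty string rather than a missing column, and an empty dimension value is a plausible reason for a metric event to be dropped (I haven't confirmed the exact Splunk behavior, so treat that as a hypothesis to test); substituting a placeholder at or before ingestion sidesteps the question. The parsing side, illustrated:

```python
import csv
import io

sample = "one,1,100\ntwo,2,200\nthree,3,\nfour,4,400\n"

rows = list(csv.reader(io.StringIO(sample)))
# 'three,3,' still has three fields; the last one is the empty string
print(rows[2])  # ['three', '3', '']

# Substitute a placeholder so the dimension is never empty
cleaned = [[field or "unknown" for field in row] for row in rows]
print(cleaned[2])  # ['three', '3', 'unknown']
```

If the placeholder approach fixes ingestion, the empty dimension was indeed the culprit, and the same substitution can be done at index time with a SEDCMD or transform.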
Hi guys. Can you confirm that the forwarder will never "merge" these different inputs that reference the same path?

addon1: etc/apps/addon1/default/inputs.conf

[monitor:///tmp/]
whitelist=.*\.log$|.*\.txt$
index=blabla
sourcetype=blabla

addon2: etc/apps/addon2/default/inputs.conf

[monitor:///tmp/]
whitelist=.*\.json$|.*\.dat$
index=blabla
sourcetype=blabla

... the inputs from addon1 will be taken into consideration, while those from addon2 will be rejected (conflict), without merging the whitelists for the same original path. So do I absolutely need to use a single add-on holding everything?

singleaddon: etc/apps/singleaddon/default/inputs.conf

[monitor:///tmp/]
whitelist=.*\.log$|.*\.txt$|.*\.json$|.*\.dat$
index=blabla
sourcetype=blabla

The single add-on works, obviously.
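That matches how Splunk layers .conf files: merging happens per *attribute*, not per stanza, so both apps' settings for [monitor:///tmp/] combine, but when both set the same attribute (whitelist here) only the higher-precedence value survives — the two regex lists are never concatenated. A sketch of that attribute-level layering (which app actually wins depends on Splunk's precedence rules, such as app-name sort order, so addon1 winning below is an assumption):

```python
# Settings for the same [monitor:///tmp/] stanza from two apps
addon1 = {"whitelist": r".*\.log$|.*\.txt$", "index": "blabla"}
addon2 = {"whitelist": r".*\.json$|.*\.dat$", "sourcetype": "blabla"}

# Attribute-level merge: the later dict (higher precedence) wins per key,
# exactly as btool resolves duplicate attributes across files.
merged = {**addon2, **addon1}

print(merged["whitelist"])  # addon2's patterns are simply gone
print(sorted(merged))       # ['index', 'sourcetype', 'whitelist']
```

So yes: to monitor all four extensions under one path, the combined whitelist has to live in a single stanza, as in your singleaddon.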
For testing, I installed Splunk 8.0.2 on my Windows 10 laptop and, on top of it, installed the Splunk Add-on for ServiceNow. I successfully opened the configuration tab inside the app and linked my SNOW account, but when I click the Inputs tab inside the SNOW app it shows "loading" and never loads. Please help.
Hi, I have the multiselect filter below. When username="ABC", I need to display two more filters (ip, city), and the values of those two multiselect inputs should also be reflected in all panels; otherwise they should not be part of the search.

<input id="selid">
  <search>
    <query>search user IN ($seluser$) | table id | dedup id</query>
  </search>
  <delimiter>, </delimiter>
  <default>*</default>
  <change>
    <condition value="ABC">
      <set token="set_tok"></set>
      <set token="set_info"> ip IN ($selip$) city IN ($selcity$)</set>
    </condition>
    <condition>
      <unset token="set_tok"></unset>
      <set token="set_info"></set>
    </condition>
  </change>
</input>

Base query:

index ... | search name IN ($selname$) user IN ($seluser$) id IN ($selid$) $set_info$

What I want each panel to run: when I select user=ABC,

index ... | search name IN ($selname$) user IN ($seluser$) id IN ($selid$) ip IN ($selip$) city IN ($selcity$)

and for any other user,

index ... | search name IN ($selname$) user IN ($seluser$) id IN ($selid$)

The problem: when I change the value of either of those two filters (ip, city), only the initial value is used; changing it to anything else has no effect on the panels. Please suggest what I am doing wrong here.
Hi, I have a requirement to add multiple pools for the same license stack. Does editing the size of a pool cause any loss of data on the indexers? Does deleting an existing pool cause any loss of data on the indexers? I received the message below when I was trying to delete one of the pools:

"Once the pool is deleted, all of the indexers in the pool will lose their shared indexing volumes. Do you still want to delete this pool?"
Our Splunk administrator jumped ship shortly after getting Splunk set up, and nobody officially took over his role, so there are no real Splunk experts here. I'm trying to fill in some gaps. We recently decommissioned a couple of domain controllers and replaced them with new ones, and then some reports stopped showing events related to accounts getting locked out, password resets, and users that weren't logging out at night (there are probably more, but these are the three that management is specifically complaining about). I went into Settings | Data Inputs | Active Directory monitoring and added the names of the two new domain controllers; it was just running with "default" before. That didn't seem to fix the issue. Where else in Splunk would pointers to domain controllers be configured that might prevent the Splunk server from getting these events? This is Splunk version 6.6.4, running on Windows. (Hopefully this question isn't too vague to work with.)