All Posts

So the site we use to get a 6-month developer license has been down for a few days now. After accepting the T&Cs (https://dev.splunk.com/enterprise/dev_license/devlicenseagreement/) and waiting for 30+ seconds, I end up on https://dev.splunk.com/enterprise/dev_license/error, which shows this error: "$Request failed with the following error code: 500". I've already sent this to devinfo@splunk.com but have gotten no response. Anyone else hitting this issue?
2. It appears the rows from which my_count is taken are always those without a _time value resulting from the eval in my query (because either `my_timestamp` did not match the strptime format, or that field was not present when the record was ingested into Splunk -- my data has both cases). This is to say that you have bad data. Bad data leads to bad results. You need to find a way to fix your data, or at least fix how you extract from my_timestamp if you cannot fix that field in the data.
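As a minimal Python sketch (not Splunk itself) of the behaviour described above: when a timestamp string doesn't match the strptime format, or the field is absent, the eval yields a null _time rather than an error. The helper below mimics that null-on-mismatch behaviour; Python's `%f` stands in for Splunk's `%N` subsecond specifier.

```python
from datetime import datetime

def parse_time(value, fmt="%Y-%m-%dT%H:%M:%S.%f+00:00"):
    """Mimic Splunk's strptime(): return None (a "null" _time)
    when the value is missing or does not match the format."""
    if value is None:
        return None
    try:
        return datetime.strptime(value, fmt)
    except ValueError:
        return None

print(parse_time("2023-11-24T18:38:26.541235+00:00"))  # parses to a datetime
print(parse_time("not-a-timestamp"))                   # None -> null _time
print(parse_time(None))                                # None -> null _time
```

Rows falling into the last two cases are exactly the "bad data" cases: they sort unpredictably when you ask for latest(_time).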
Sorry, I assumed the dataset "botsV1" was very widely known. Thank you for responding to my question; however, I have since been able to solve the issue.
Hi @Raymond2T, In Simple XML dashboards, you can use the deprecated but still functional classField option. For example, if you have a field named range with values green and red, use:

<single>
  <search>
    <!-- ... -->
  </search>
  <option name="classField">range</option>
  <!-- ... -->
</single>

When trellis is enabled, the resulting div element will have an additional class of either green or red, depending on the value of that row's range field:

<div id="singlevalue" class="single-value shared-singlevalue red" ...>

You can adjust your stylesheet to include the green and red classes as desired:

<style>
@keyframes blink {
  100%, 0% { opacity: 0.6; }
  60% { opacity: 0.9; }
}
.single-value.red rect {
  animation: blink 0.8s infinite;
}
</style>
@gillettepd  I've not used or written code (in C) for Domino since it was still a Lotus product. During IBM's tenure, JDBC Access for IBM Lotus Domino <https://www.openntf.org/main.nsf/project.xsp?r=project/JDBC%20Access%20for%20IBM%20Lotus%20Domino> may have been a viable option for querying LOG.NSF, DOMLOG.NSF, etc. using Splunk DB Connect. The JDBC solution may work with HCL Domino 11.x, but a quick search suggests it will not work with 12.x. The JDBC driver may also be incompatible with DB Connect, depending on its implementation of expected JDBC interfaces. That said, give it a try! I would evaluate OData access <https://opensource.hcltechsw.com/Domino-rest-api/tutorial/odata/index.html>; however, there is no OData add-on for Splunk. If you're comfortable with Python, REST API Modular Input <https://splunkbase.splunk.com/app/1546> is a (mostly) fee-based add-on that may simplify writing an OData wrapper. Splunk Add-on Builder <https://splunkbase.splunk.com/app/2962> is always an option, but it exposes the Splunk API in a way that may complicate your solution.
With a query like the following (I've simplified it a little here and renamed some fields)

index="my-test-index" project="my-project"
| eval _time = strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S.%N+00:00")
| stats latest(my_timestamp) latest(_time) latest(my_count) as my_count by project

I see behaviour that surprised me:

1. If I repeatedly issue the query, the value of my_count varies.
2. It appears the rows from which my_count is taken are always those without a _time value resulting from the eval in my query (because either `my_timestamp` did not match the strptime format, or that field was not present when the record was ingested into Splunk -- my data has both cases).
3. In the output of the search, the value of my_timestamp returned does not always come from the same ingested record as my_count.
4. In fact, the value of my_timestamp in the search output is always taken from the same single record: it doesn't change when I repeatedly issue the query.

I guess 1. and 2. are because "null" (or empty, or some similar concept) _time values aren't really expected and happen to sort latest. I guess 3. is because the `latest` function operates field by field and does not select a whole row -- combined again with the fact that some _time values are null. 4. I don't understand, but perhaps it is a coincidence and not reliably true in general outside of my data set; I'm not sure.

What I really want is to find the ingested record with the latest value of `my_timestamp` for a given `project`, so I can present fields like `my_count` by `project` in a "most recent counts" table. I don't really want to operate on individual fields' "latest" values as in the query above, but rather on the latest entire records. How can I best achieve that in Splunk?
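One common way to keep whole records together (a hedged sketch, not an answer from this thread, reusing the hypothetical field names in the question) is to sort events by the raw timestamp string and keep the first event per project, so each surviving row is one complete ingested record. This works here because fixed-width ISO-8601 timestamps sort correctly as strings:

```
index="my-test-index" project="my-project"
| sort 0 -my_timestamp
| dedup project
| table project my_timestamp my_count
```

The `0` tells sort not to truncate results; `dedup project` then keeps only the first (latest-timestamped) event per project, with all of its fields intact.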
More words please. What do you want to do? What do you mean by "cisco servers" and "cisco console"? And what does it have to do with Splunk?
If I recall correctly, it worked when I sent test logs from my client app, which is instrumented with the Faro Web SDK library. I didn't go back to compare the log contents against the OTLP spec log to figure out the difference, since it was working. I think it was silently failing for some reason.
In one example, the brand field is terminated by a space rather than an ampersand, so add \s to the regex.

index=dd
| rex field=_raw "brand=(?<brand>[^&\s]+)"
| rex field=_raw "market=(?<market>[^&]+)"
| rex field=_raw "cid=(?<cid>\d+)"
| table brand, market, cid
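A small Python sketch (not SPL, but the same regex semantics) of why the character class matters, using a shortened version of the third sample from the thread where brand is the last query parameter:

```python
import re

# Shortened sample: brand appears last, so it is terminated by a
# space (end of the query string) rather than by '&'.
raw = '"GET /products/?cid=1127944&market=us&brand=lll HTTP/1.1" 200'

greedy = re.search(r"brand=([^&]+)", raw).group(1)
fixed  = re.search(r"brand=([^&\s]+)", raw).group(1)

print(greedy)  # 'lll HTTP/1.1" 200' -- runs past the brand value
print(fixed)   # 'lll'
```

`[^&]+` keeps matching until it hits an ampersand or the end of the string, so it swallows the rest of the request line; adding `\s` to the excluded class stops the match at the first whitespace as well.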
Hi, I have my messages like below msg: abc.com - [2023-11-24T18:38:26.541235976Z] "GET /products/?brand=ggg&market=ca&cid=5664&locale=en_CA&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&size=3%7C131%7C1%7C1914&trackingid=541820668241808 HTTP/1.1" 200 0 47936  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36" "10.119.25.242:59364" "10.119.80.158:61038" x_forwarded_for:"108.172.104.40, 184.30.149.136, 10.119.155.154, 10.119.145.54,108.172.104.40,10.119.112.127, 10.119.25.242" x_forwarded_proto:"https" vcap_request_id:"faa6d72c-4518-4847-47b2-0b340bb27173" response_time:0.455132 gorouter_time:0.000153 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"41" instance_id:"5698b714-359f-4906-742e-2bd7" x_cf_routererror:"-" x_b3_traceid:"042db9308779903a607119a204239679" x_b3_spanid:"b6e3d71259e4c787" x_b3_parentspanid:"607119a204239679" b3:"1188a5551d8c70081e69521568459a30-1e69521568459a30" msg: abc.com - [2023-11-24T18:38:25.779609363Z] "GET /products/?brand=hhh&market=us&cid=1185233&locale=en_US&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&department=136&trackingid=64354799847524800 HTTP/1.1" 200 0 349377 "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" "10.119.25.155:53702" "10.119.80.152:61026" x_forwarded_for:"174.203.39.239, 23.64.120.177, 10.119.155.137, 10.119.145.11,174.203.39.239,10.119.112.37, 10.119.25.155" x_forwarded_proto:"https" vcap_request_id:"d1628805-0307-4bf7-7d8d-b1fa3a829986" response_time:1.211096 gorouter_time:0.000257 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"180" instance_id:"8faf9328-b05d-4618-7d12-96e6" x_cf_routererror:"-" x_b3_traceid:"06880ee3e5ad85b36dd3f4e64337a842" x_b3_spanid:"acb1620e517eebec" x_b3_parentspanid:"6dd3f4e64337a842" b3:"06880ee3e5ad85b36dd3f4e64337a842-6dd3f4e64337a842" msg: abc.com - 
[2023-11-24T18:38:26.916331792Z] "GET /products/?cid=1127944&department=75&market=us&locale=en_US&pageNumber=1&pageSize=60&trackingid=6936C9BF-D9DD-4D77-A14F-099C0400345D&brand=lll HTTP/1.1" 200 0 48615 "-" "browse" "10.119.25.172:51116" "10.119.80.139:61034" x_forwarded_for:"10.119.80.195, 10.119.25.172" x_forwarded_proto:"https" vcap_request_id:"a3125da7-a602-4e17-6656-909f380c12ed" response_time:0.068075 gorouter_time:0.000737 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"156" instance_id:"4f44c63e-44c6-4605-7466-fe5d" x_cf_routererror:"-" x_b3_traceid:"731b434ec32bb0eb6236fd4a8b8e1195" x_b3_spanid:"6236fd4a8b8e1195" x_b3_parentspanid:"-" b3:"731b434ec32bb0eb6236fd4a8b8e1195-6236fd4a8b8e1195"

I am trying to extract the brand, market, and cid values from the above URLs with the query below:

index=dd
| rex field=_raw "brand=(?<brand>[^&]+)"
| rex field=_raw "market=(?<market>[^&]+)"
| rex field=_raw "cid=(?<cid>\d+)"
| table brand, market, cid

but I get the whole URL after brand= extracted, not just the brand, market, and cid values. Please help.
I've read a little bit more about this and, to be honest, I couldn't find a clear answer or any reason why it works this way. The "App/user configuration files" documentation says that app.conf is for the user/app level only and cannot be used for global configurations, BUT the instructions still say you should put it into etc/system/local/app.conf to use it globally (to set the deployer push mode). This is quite confusing! And that file actually lives on the deployer under etc/shcluster/apps/<app>, not etc/apps/<app>, which basically means it isn't merged with other app files when the bundle is applied (read: created) on the deployer. Precedence applies only to files under etc/apps/<app> + etc/system, if I have understood correctly.

Usually when you create your own app, you put all configurations in the default directory, not the local directory. This shouldn't have any side effect other than where the files end up when the bundle is applied to the SHC members. Of course, values like pass4SymmKey are encrypted (with the plain text removed) only in files that are in local! If you have apps, e.g. from Splunkbase, then you should put your local changes under the local directory to avoid losing them when you update the app to a newer version. But behaviour shouldn't otherwise depend on whether app.conf settings are in default vs. local. If there are side effects, they should be mentioned in the docs. I haven't seen any mention that default vs. local is used to set global vs. local values; it only affects precedence. Definitely this needs some feedback to the doc team.

BTW: @pmerlin1, you said that you migrated from SH to SHC. Did you follow the instructions and use only clean, new SHs as members of this SHC, rather than reusing (without cleaning) the old SH?
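For reference, the deployer push mode discussed above is configured in app.conf on the deployer. A minimal sketch, assuming the documented [shclustering] stanza; the app name and chosen mode are illustrative:

```
# On the deployer, e.g. etc/shcluster/apps/<app>/local/app.conf
[shclustering]
# Controls how the deployer handles this app's default/local
# directories when pushing the bundle to SHC members:
#   full | merge_to_default | local_only | default_only
deployer_push_mode = merge_to_default
```

The mode chosen here is what determines whether files land in default or local on the SHC members, which is why where you place settings on the deployer matters.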
Hello, I tried that option, but no luck. I'm not seeing any option called "Akamai Security Incident Event Manager API" under Settings > Data Inputs.
Instead of looking in the app, please try clicking on Settings -> then click on Data Inputs and then look for Akamai​ Security Incident Event Manager API. Once you locate it, click on it and follow the instructions mentioned on this page: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector#install-the-splunk-connector
There is no UF add-on specific to ES.  ES can produce an add-on for your indexers, but that method can be used only in limited circumstances.  See https://docs.splunk.com/Documentation/ES/7.2.0/Install/InstallTechnologyAdd-ons#Deploy_add-ons_to_forwarders for when it can be used and alternatives for other environments.  I recommend manual installation of add-ons.
Again - there is no such thing as an "add-on for UF". There are several different add-ons (which you install on various components of your Splunk infrastructure, including UFs) needed for the specific solution you want to ingest data from. So if you want to process logs from Checkpoint firewalls, you use the TA for Checkpoint. If you get logs from Proofpoint, you install the TA for Proofpoint. And so on.
Can you share the TA for the UF, specifically the one used for ES? Or the download link or any helpful screenshot?
Hi Team, I have recently installed the AppDynamics platform admin on a Linux server and successfully installed the Controller through the GUI, but I am not able to install the Events Service. (Note: I have two Linux servers, one for Platform Admin & Controller, and a second server for the Events Service.) I successfully added the Events Service server host in the Hosts tab via OpenSSH between the two servers. While installing the Events Service, I got a connection timeout error (unable to ping), so I tried changing the property values in the events-service-api-store.properties file to IP addresses instead of hostnames. Then I added the following environment variable for the new user:

export INSTALL_BOOTSTRAP_MASTER_ES8=true

After that I restarted the Events Service manually using the commands below from the events-service/processor directory:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

After following the above steps, I get the below error in the Enterprise Console while starting the Events Service. Please help me resolve this issue.
If it's text then Splunk can ingest it. How to ingest it is another matter. There are a few ways to onboard data into Splunk:

- Install a universal forwarder on the server to send log files to Splunk.
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog.
- Use the server's API to extract data for indexing.
- Use Splunk DB Connect to pull data from the server's SQL database.
- Have the application send data directly to Splunk using HTTP Event Collector (HEC).
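To illustrate the last option, here is a minimal Python sketch that builds (but does not send) an HEC request. HEC listens on port 8088 by default and expects a POST to /services/collector/event with an `Authorization: Splunk <token>` header; the host, token, sourcetype, and index below are hypothetical placeholders:

```python
import json
import urllib.request

def hec_request(host, token, event, sourcetype="my_app:log", index="main"):
    """Build an HTTP Event Collector request for a single event.

    HEC wraps the payload in a JSON object whose 'event' key
    carries the actual data; sourcetype and index are optional
    metadata fields in the same envelope.
    """
    payload = {"event": event, "sourcetype": sourcetype, "index": index}
    return urllib.request.Request(
        url=f"https://{host}:8088/services/collector/event",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical host and token, for illustration only.
req = hec_request("splunk.example.com",
                  "00000000-0000-0000-0000-000000000000",
                  {"message": "user logged in", "user": "alice"})
print(req.full_url)
```

Sending it is then just `urllib.request.urlopen(req)` (with appropriate TLS certificate handling for your environment).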