All Topics

I am trying to send a POST request to the storage/passwords service, but it returns a 404 error and sends the request to port 8000 instead of port 8089. Below is the sample code:

    let storage_pwd = new splunk_js_sdk.Service.StoragePassword(splunk_js_sdk_service, 'test', application_name_space);
    storage_pwd.post('test', {name: "Splunker", realm: "SDK", password: "password_here"}, function(e, r) {
        console.log(e, r);
    });

The browser console shows:

    common.js:1035 POST http://localhost:8000/en-US/splunkd/__raw/servicesNS/nobody/test/storage/passwords/test/test?output_mode=json 404 (Not Found)

Any help will be appreciated.
I would like to run a query for any user additions to privileged Active Directory groups. I am storing the AD groups of interest in a lookup file titled DomainPrivilegedGroups.csv, and the lookup definition has also been defined with the same name, DomainPrivilegedGroups.csv. At this time the lookup file contains 16 rows, and this is likely to grow in the future. The file contains one column, titled GroupName.

My eventual search will look for any events where EventID=4728 OR EventID=4732 OR EventID=4756. For now, I'm just trying to get the basic search working, so I am running the below:

    sourcetype="XmlWinEventLog" [ | inputlookup DomainPrivilegedGroups.csv | rename GroupName as Group_Name ]

I'm performing the rename because I know that the events store the group name in an attribute titled Group_Name. I know that there are events containing one of the group names, so I am expecting results to return. Is there anything glaringly obvious I'm doing wrong here?

Another consideration is whether a lookup file is the best option. From what I can see, there is no way to update a lookup file; instead, when wanting to make any additions, I would need to delete and re-create the lookup file and definition. Is this correct?

Thanks in advance!
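Not an authoritative fix, but one thing worth checking: a subsearch hands back every field it is given, so trimming it to just the renamed field (assuming the events really carry an extracted Group_Name field) keeps stray columns out of the generated search. A sketch:

```
sourcetype="XmlWinEventLog" (EventID=4728 OR EventID=4732 OR EventID=4756)
    [ | inputlookup DomainPrivilegedGroups.csv
      | rename GroupName as Group_Name
      | fields Group_Name ]
```

On the second question: lookup files can be appended to without re-creating them, via outputlookup append=true, for example ("Some New Group" is a placeholder value):

```
| makeresults
| eval GroupName="Some New Group"
| fields GroupName
| outputlookup append=true DomainPrivilegedGroups.csv
```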
I have a lookup table that has a list of values in it, similar to:

    id | value
    1  | test_value1
    2  | test_value2

I can search for all logs that have any of these values in them by doing:

    index=blah sourcetype=blah [|inputlookup test_values | rename value AS search | fields search | format]

This creates a search such as:

    index=blah sourcetype=blah ( ( "test_value1" ) OR ( "test_value2" ) )

What I'm trying to figure out is how to know which value from the lookup table was responsible for matching each log. However, because I'm not searching for the values in any particular field in the logs, and because the lookup table values might be substrings of a larger value in the logs, I don't think I can use "| lookup" to match back to the lookup table.

Put another way, without even involving a lookup table: if you had this search:

    index=blah sourcetype=blah *this* OR *that*

is there any way to create a new field within the matching logs that would contain either the string "this" or "that", depending on which one was the cause of the match?
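One hedged idea: build the candidate values into a multivalue field and keep only the ones that actually appear in the event, using mvmap. The values are hard-coded here purely for illustration; in practice they would be kept in sync with the lookup (e.g. via a scheduled search or macro):

```
index=blah sourcetype=blah [|inputlookup test_values | rename value AS search | fields search | format]
| eval candidates=split("test_value1,test_value2", ",")
| eval matched_value=mvmap(candidates, if(match(_raw, candidates), candidates, null()))
```

matched_value ends up multivalue if more than one candidate matched the same event. Note the candidate strings are treated as regexes by match(), so any regex metacharacters in them would need escaping.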
Hello,

I need to remove the values found (string) from another field. For example:

    FIELD1 - abcmailingxyz
    LIST   - mailing, ...

Using

    | eval foundIt=if(match(FIELD1,$LIST$),"X",".")

I am able to determine whether the list of words is contained in the values of FIELD1. After the eval has found the match (foundIt=X), I need to remove the word "mailing" from the value of FIELD1. Result: abcxyz (or abc_xyz if we decide to use an underscore in between).

Question: how do I take the value in LIST and remove it from the value in FIELD1, leaving the remaining letters behind?

Thanks in advance. God bless, safe and healthy to you and yours,
Genesius
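A minimal sketch of one possibility, assuming replace() accepts its pattern from a field the same way match() does here, and that LIST holds a single word (regex metacharacters in LIST would need escaping first):

```
| eval foundIt=if(match(FIELD1, LIST), "X", ".")
| eval FIELD1_clean=if(foundIt="X", replace(FIELD1, LIST, ""), FIELD1)
```

Using "_" instead of "" as the third replace() argument would give the abc_xyz variant.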
The following query returns a result that is one hour off.

    | makeresults
    | eval timestr="2020-03-08T02:00:21"
    | eval unixtime=strptime(timestr, "%Y-%m-%dT%H:%M:%S")
    | eval convertedBackToString=strftime(unixtime, "%Y-%m-%dT%H:%M:%S")
    | table timestr, unixtime, convertedBackToString

If I change the date or hour, it works correctly. But for any time (I didn't try them all) in the 2 o'clock range, strptime returns the wrong value. This happens on Splunk Enterprise 8.1.3 and my previous version, which I think was 8.0.2. It works correctly on 7.3.11.

Can somebody confirm this is a bug?

Thanks,
Paul
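For comparison, the same one-hour shift can be reproduced outside Splunk: 2020-03-08 02:00:21 falls inside the US spring-forward gap (clocks jump from 02:00 straight to 03:00), so that wall-clock time never existed in a US DST timezone. A small Python sketch (America/Chicago is just an example zone; any zone that springs forward at 02:00 local on that date behaves the same), which suggests timezone handling of a nonexistent time rather than a parsing bug:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Chicago")

# 2020-03-08 02:00:21 is inside the spring-forward gap: this wall-clock
# time never existed in this zone.
naive = datetime.strptime("2020-03-08T02:00:21", "%Y-%m-%dT%H:%M:%S")
aware = naive.replace(tzinfo=tz)

# Round-tripping through UTC shifts the result by one hour, mirroring the
# strptime/strftime round trip in the question.
round_trip = aware.astimezone(ZoneInfo("UTC")).astimezone(tz)
print(round_trip.strftime("%Y-%m-%dT%H:%M:%S"))  # prints 2020-03-08T03:00:21
```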
Hi,

I've been struggling to get Workload Admission Rules working properly. After a bunch of testing and monitoring with the Monitoring Console workload dashboards, I can see that only searches generated from the cluster master are applying the admission rule filter. All other search heads ignore the filter.

Has anyone run into this issue? On Splunk Enterprise 8.1.3.

Thank you!

Chris
Enterprise Security is reporting an error: the latest threat list cannot be downloaded. I visited the site it is trying to access manually, with no problem. Please advise.
Am I going crazy, or is there legitimately no documentation on setting up a heavy forwarder to point and send data to our cloud instance? All the documentation I'm finding is centered around a 100% on-prem setup. Has anyone had any luck with this?
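For what it's worth, the usual route for Splunk Cloud is to download the "Universal Forwarder credentials" app from the cloud stack itself and install it on the heavy forwarder, rather than hand-writing outputs.conf. Purely for illustration, the shape of what that app delivers looks roughly like the fragment below; the stack name and certificate path are placeholders, not real values:

```
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.<your-stack>.splunkcloud.com:9997
sslRootCAPath = $SPLUNK_HOME/etc/apps/100_<your-stack>_splunkcloud/default/cacert.pem
```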
Hi Splunk Team,

I have deployed Splunk App for Infrastructure and it is working well (essentially it deploys SCK to my server). I have been trying to update my ConfigMap called "sck-rendered-splunk-kubernetes-logging" to set up my source.container.conf file as follows:

    <source>
      @id containers.log
      @type tail
      @label @splunk
      tag tail.containers.*
      path /var/log/containers/*.log, /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/emptydir*/.*/*.log
      exclude_path ["/var/log/containers/fluentd*"]
      pos_file /var/log/splunk-fluentd-containers.log.pos
      tag kubernetes.*
      path_key source
      read_from_head true
      <parse>
        @type regex
        expression /^(?<log>.*)$/
        time_key time
        time_type string
        time_format %Y-%m-%dT%H:%M:%SZ
      </parse>
    </source>

Please take a look at this entry from the path line above (copied here):

    /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/emptydir*/.*/*.log

We are not correctly receiving legacy app logs in Splunk, which was the purpose of the above entry in the path config.

To give some more info on our file structure: we have an emptyDir volume mounted to our legacy logging applications, which appears at /opt/apps/weblogs in the pod and under /var/lib/kubelet/pods on the node (server) itself.

Here's an example of the absolute path to an application's legacy logs hosted on a node:

    /var/lib/kubelet/pods/e36c6c23-325b-4e1e-b11e-26ff5abffd17/volumes/kubernetes.io~empty-dir/emptydir-my-test-app-rd-mytestapp/mytestapp

Here's a look at the filesystem mounted inside the pod:

    sh-4.2$ cd /opt/apps/weblogs/
    sh-4.2$ ls
    andrew.log  mytestapp
    sh-4.2$ ls mytestapp
    mytestapp.log  other_directory

With our current path, we are able to tail the file "andrew.log" in Splunk, but we are not able to tail log files from mytestapp/ and downwards through the filesystem. Can you give some advice on how to update our path in the conf file, or any other place (e.g. the regex), to help us effectively ingest logs from both andrew.log (i.e. /opt/apps/weblogs) and any .log file that lives in mytestapp or recursively below it (other_directory and beyond)?
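One hedged observation on the glob itself (based on how fluentd's in_tail expands paths; worth verifying against your fluentd version): a single * matches only one path segment, and the /.*/ segment matches only dot-prefixed directory names, so .../emptydir*/.*/*.log will never descend into mytestapp/other_directory. A recursive ** glob would, for example:

```
path /var/log/containers/*.log,/var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/emptydir*/**/*.log
```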
Hello,

I set up a trial account on Splunk Cloud and followed the documentation below to set up the Azure add-on: https://prd-p-ef4ph.splunkcloud.com/en-US/app/TA-MS-AAD/aad_app_registration

But after that, when I click the Configuration tab in the Microsoft Azure add-on app, the loading circle just keeps spinning forever. What did I miss? I have found similar posts, but they seem to concern on-premise installations. In my case, all I did was add an app in IAM and install the Azure add-on in Splunk Cloud. I appreciate any advice.
Hi Team,

I have a few devices located across the globe and want to monitor them only during their business hours.

    index=opennms
    | fieldformat Time=strftime(Time,"%Y-%m-%d %l:%M:%S")
    | table DEVICE,ALERT,SITECODE,COUNTRY,REGION,TIME_ZONE

Output:

    DEVICE | ALERT                 | SITECODE | COUNTRY       | REGION               | TIME_ZONE
    FRNDG  | DEVICE DOWN           | NDG      | France        | Europe               | +01:00
    FRNDG  | INTERFACE DOWN        | NDG      | France        | Europe               | +01:00
    SGACB  | BGP DOWN              | ACB      | Singapore     | Asia Pacific         | +08:00
    NGERH  | INTERFACE UTILIZATION | ERH      | Nigeria       | Middle East / Africa | +01:00
    USBMT  | ISIS FLAP             | BMT      | United States | North America        | -06:00
    USBTN  | BGP DOWN              | BTN      | United States | North America        | -06:00
    SGSNG  | INTERFACE DOWN        | SNG      | Singapore     | Asia Pacific         | +08:00
    USEMC  | DEVICE DOWN           | EMC      | United States | North America        | -06:00
    CAKRL  | INTERFACE DOWN        | KRL      | Canada        | North America        | -07:00
    FRFOS  | BGP DOWN              | FOS      | France        | Europe               | +01:00

Splunk should search for alerts and generate alerts only between 9 AM and 5 PM in each country's local business hours, and disable alerting the rest of the time. Is that possible?
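A sketch of one way to derive the local hour from the TIME_ZONE column, which could then gate a scheduled alert. This assumes TIME_ZONE is always a fixed offset string of the form ±HH:MM, and it deliberately ignores daylight-saving changes:

```
index=opennms
| eval oh=tonumber(substr(TIME_ZONE, 1, 3)), om=tonumber(substr(TIME_ZONE, 5, 2))
| eval offset_sec=oh*3600 + if(oh<0, -om, om)*60
| eval local_hour=tonumber(strftime(_time + offset_sec, "%H"))
| where local_hour>=9 AND local_hour<17
```

An alert built on this only fires when events survive the where clause, so devices outside their 9-to-5 window drop out automatically.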
How do I share my ML Toolkit models with others and change the settings? It always shows up as Private and I'm unable to change the settings.
The following search works well, but I'm not able to get it to produce daily results, with each day getting a row. How can I get it to provide daily results?

    index=cdb_summary source=CDM_*_Daily_Summary sourcetype=vuln_summary OR (sourcetype=assetmanager_summary devicetype!=npvw [| inputlookup IT_Systems where Status="Operational" OR Status="Modification" | fields fismaid]) fismaid=*
    | fields asset_id,hostname,ipaddress,ec2_instance_id,devicetype,os,everSeenBy,firstSeen,lastScan,last_scan_attempt_scandate,last_scan_attempt_credentialed,Credentialed_Scan,lastSeen,fismaid,CriticalCount,HighCount,Criticals,Highs,TotalAssetScore,AuthenticationRequired,CriticalAssetScore,HighAssetScore
    | stats values(*) as * by asset_id
    | eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
    | where elast>=relative_time(now(),"@mon")
    | eval AutoFail=if(AuthenticationRequired=="1" AND (Credentialed_Scan=="false" OR isnull(Credentialed_Scan)),1,0)
    | eval CriticalPoints=CriticalCount*12, HighPoints=HighCount*4
    | eval CriticalPoints=if(CriticalPoints>60,60,CriticalPoints), HighPoints=if(HighPoints>40,40,HighPoints)
    | eval PointsLost=CriticalPoints + HighPoints
    | eventstats sum(PointsLost) as TotalPointsLost, sum(TotalAssetScore) as TotalScore, dc(asset_id) as TotalAssets, sum(AutoFail) as AutoFailTotal, count(eval(isnull(Credentialed_Scan))) as MissingScans
    | eval Percent_to_AutoFails=round(AutoFailTotal/TotalAssets*100)
    | eval Percent_MissingScans=round(MissingScans/TotalAssets*100)
    | eval Percent_PointsLost=round(TotalPointsLost/TotalAssets)
    | eval VULN_Score=round(TotalScore/TotalAssets)
    | lookup IT_Systems fismaid output Title, hva
    | replace 1 with "True", 0 with "False" in hva
    | rename hva as "High Value Assets"
    | table VULN_Score, Percent_MissingScans, TotalAssets, Percent_PointsLost
    | stats values(*) as *

I tried adding | bin and a by-bucket clause to get the search below, which returns zero results even though it does match events.

    index=cdb_summary source=CDM_*_Daily_Summary sourcetype=vuln_summary OR (sourcetype=assetmanager_summary devicetype!=npvw [| inputlookup IT_Systems where Status="Operational" OR Status="Modification" | fields fismaid]) fismaid=*
    | bin span=1d _time
    | fields asset_id,hostname,ipaddress,ec2_instance_id,devicetype,os,everSeenBy,firstSeen,lastScan,last_scan_attempt_scandate,last_scan_attempt_credentialed,Credentialed_Scan,lastSeen,fismaid,CriticalCount,HighCount,Criticals,Highs,TotalAssetScore,AuthenticationRequired,CriticalAssetScore,HighAssetScore
    | stats values(*) as * by asset_id by _time
    | eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
    | where elast>=relative_time(now(),"@mon")
    | eval AutoFail=if(AuthenticationRequired=="1" AND (Credentialed_Scan=="false" OR isnull(Credentialed_Scan)),1,0)
    | eval CriticalPoints=CriticalCount*12, HighPoints=HighCount*4
    | eval CriticalPoints=if(CriticalPoints>60,60,CriticalPoints), HighPoints=if(HighPoints>40,40,HighPoints)
    | eval PointsLost=CriticalPoints + HighPoints
    | eventstats sum(PointsLost) as TotalPointsLost, sum(TotalAssetScore) as TotalScore, dc(asset_id) as TotalAssets, sum(AutoFail) as AutoFailTotal, count(eval(isnull(Credentialed_Scan))) as MissingScans
    | eval Percent_to_AutoFails=round(AutoFailTotal/TotalAssets*100)
    | eval Percent_MissingScans=round(MissingScans/TotalAssets*100)
    | eval Percent_PointsLost=round(TotalPointsLost/TotalAssets)
    | eval VULN_Score=round(TotalScore/TotalAssets)
    | lookup IT_Systems fismaid output Title, hva
    | replace 1 with "True", 0 with "False" in hva
    | rename hva as "High Value Assets"
    | table VULN_Score, Percent_MissingScans, TotalAssets, Percent_PointsLost
    | stats values(*) as *
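A hedged guess at the shape that usually produces one row per day (only the lines that differ are shown; the eval/lookup steps stay as they are): stats and eventstats take a single by clause with comma-separated fields, so "by asset_id by _time" is suspect, and the eventstats totals and the final stats also need a by _time so per-day groups are preserved instead of collapsed:

```
| bin span=1d _time
| stats values(*) as * by asset_id, _time
    ... (eval steps unchanged) ...
| eventstats sum(PointsLost) as TotalPointsLost, sum(TotalAssetScore) as TotalScore, dc(asset_id) as TotalAssets, sum(AutoFail) as AutoFailTotal, count(eval(isnull(Credentialed_Scan))) as MissingScans by _time
    ... (eval/lookup/table steps unchanged) ...
| stats values(*) as * by _time
```

The `where elast>=relative_time(now(),"@mon")` filter is also worth checking; it can silently discard all rows when lastSeen falls outside the current month.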
As per the screenshot below, I created toggle tabs, but when I use them, the results in the panel below are not populating. Please let me know what I am doing wrong.
Afternoon all,

I have been playing with a search that will eventually become a saved search within Splunk ES. The idea is for the search to run, pull in observables that are populated in the ip_intel lookup file, and then populate an index. So far I have this, which works fine:

    | inputlookup ip_intel append=true
    | dedup ip
    | collect index=backup_ti source=daily_ip_intel

The problem is that the inputlookup automatically returns all rows within the file. I'm looking to have this search run once a day, only updating the index with new observables rather than duplicating or overwriting the existing data. Is there a way to do this? The timestamps against the indicators within that file are messy, so it's not possible to exclude them via a relative-time subsearch, for example.
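A hedged sketch of one way to index only observables not already collected: negate a subsearch over what is already in the destination index. The usual subsearch result limits apply if the indicator set grows large, so this is a starting point rather than a definitive answer:

```
| inputlookup ip_intel
| dedup ip
| search NOT [ search index=backup_ti source=daily_ip_intel | dedup ip | fields ip ]
| collect index=backup_ti source=daily_ip_intel
```

Scheduled once a day, only rows whose ip has never reached backup_ti survive to the collect step.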
Hello all,

I am trying to get the total number of bytes/MB/GB uploaded per application in Splunk. I can't seem to find the correct search; I did find file_size. Here is the search that I started out with:

    sourcetype=x index=x access_method="Explicit Proxy"
    | table app, category, activity, user
    | dedup user
    | stats count by app

This gives me the number of users per app, but I need the number of bytes uploaded per app. Then there's this search, where I'm not sure if the totals are correct or not:

    sourcetype=x index=y access_method="Explicit Proxy" activity=upload
    | stats sum(file_size) by app

Thanks!
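Assuming file_size is reported in bytes (worth confirming for this sourcetype), the second search looks like the right shape; a sketch with readable units added:

```
sourcetype=x index=y access_method="Explicit Proxy" activity=upload
| stats sum(file_size) as total_bytes by app
| eval total_MB=round(total_bytes/1024/1024, 2)
| sort - total_bytes
```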
Hi Splunk Community!

I'm trying to get the context of an error. Here is a snippet of the logs:

    2021-03-21 11:36:43,045 [thread-1] blablabla orderid 12345
    2021-03-21 11:36:43,045 [thread-2] blablabla orderid 23456
    2021-03-21 11:36:43,045 [thread-3] blablabla orderid 34567
    2021-03-21 11:36:43,046 [thread-1] blablabla ...
    2021-03-21 11:36:43,047 [thread-1] WARN [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] - SQL Error: 1366, SQLState: HY000
    2021-03-21 11:36:43,048 [thread-1] ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] - Incorrect string value: '\xE2\x80\xAFfro...' for column 'request' at row 1
    2021-03-21 11:36:43,050 [thread-1] ERROR [class-1] - org.hibernate.exception.GenericJDBCException: could not execute statement
    javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not execute statement
    <multi-line stack trace>
    ...

The "context" I'm trying to get is:

    For orderid 12345, error "SQL Error: 1366, SQLState: HY000" while trying to write '\xE2\x80\xAFfro...' for column 'request' in "class-1".

As you can see, the order and the error messages are all on different lines. I know it's not ideal, but that's what I have to deal with right now. Is there a way to get this summary? My idea is:

1. Find the error.
2. Get the thread name.
3. Find logs with the same thread name in the past few seconds to get the orderid.
4. Find logs with the same thread name in the next few seconds to get the character(s), the column name, and the class name.

Something to keep in mind about our setup: "_time" is the indexed time, not the real log time. Because of that, sometimes there are logs from the previous day having the same "_time" as logs from the current day. So I've used extracted fields to get the date/time from the log entry. I have been able to extract fields for the date/time (rm_datetime), the thread name (rm_threadname), and the log message (rm_logmessage).

I've tried using:

1. transaction, to get the next line grouped by rm_threadname; but transaction can only go in one direction, i.e. either up or down. In this case, I'm looking for "SQL Error: 1366", so I have to walk both up and down to get the full context.

    index=main
    | transaction rm_threadname startswith="[org.hibernate.engine.jdbc.spi.SqlExceptionHelper] - Incorrect string value:" maxevents=5
    | rex field=_raw "(?<FirstFewLines>(.*[\n]){2})"
    | table FirstFewLines

2. map, based on the rm_threadname of the previous search; but I'm failing to get log entries around the rm_datetime of the error (a few seconds before and a few seconds after).

    index=main "SQL Error: 1366"
    | eval errordateserial=strptime(rm_datetime, "%Y-%m-%d %H:%M:%S,%Q"), fromdateserial=strptime(rm_datetime, "%Y-%m-%d %H:%M:%S,%Q") - 2, todateserial=strptime(rm_datetime, "%Y-%m-%d %H:%M:%S,%Q") + 1
    | table rm_datetime, rm_threadname, rm_logmessage, errordateserial, fromdateserial, todateserial
    | map [ search index=main rm_threadname=$rm_threadname$
        | eval datetime=strptime(rm_datetime, "%Y-%m-%d %H:%M:%S,%Q")
        | eval datetime >= $fromdateserial$
        | eval datetime <= $todateserial$
        | eval errordateserial=$errordateserial$, fromdateserial=$fromdateserial$, todateserial=$todateserial$ ]

3. subsearches using rm_threadname; but again I'm failing to get log entries around the rm_datetime of the error (a few seconds before and a few seconds after).

    index=main [ search index=main "SQL Error: 1366" | fields rm_datetime, rm_threadname | format ]

What am I missing? Thanks in advance.
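Not a definitive answer, but one alternative to transaction/map worth trying: sort by the extracted timestamp and let streamstats carry the most recent orderid forward within each thread, so that by the time the error line arrives, the event already knows its order. The rex pattern and the rm_orderid field name below are assumptions about the log shape:

```
index=main
| rex field=rm_logmessage "orderid (?<rm_orderid>\d+)"
| sort 0 rm_datetime
| streamstats last(rm_orderid) as last_orderid by rm_threadname
| search "SQL Error: 1366"
| table rm_datetime, rm_threadname, last_orderid, rm_logmessage
```

streamstats aggregation functions skip events where the field is null, so last_orderid should hold the value from the most recent "orderid" line on that thread; if that misbehaves in your version, an eval followed by filldown per rm_threadname is an alternative. Grabbing the following lines (column name, class name) could then be a second streamstats pass over the reversed sort order.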
I have an error with authentication in the API. A few authentications succeed, but then it gives an HTTP 401 Unauthorized error. The problem may be that the connection is not closed, but I don't know how I could do that, or whether this is really the cause of the error.

    private void authentication() {
        Service.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1);
        serviceSplunk = new Service(configMap.getSplunkHost(), configMap.getSplunkPort());
        String credentials = configMap.getSplunkUserName() + ":" + configMap.getSplunkPassword();
        String basicAuthHeader = Base64.encode(credentials.getBytes());
        serviceSplunk.setToken("Basic " + basicAuthHeader);
    }

Help me, please!
I am a developer and I have created a Splunk dashboard using Simple XML. I believe the entire code must be saved on the backend in some .conf file. Can anyone guide me to where I can find it / the folder path (like opt/etc/user/...)? On the same note, I would also like to know the location where my scheduled reports, alerts, macros, etc. are stored. Thank you.
Hello,

We are using a standalone machine agent to monitor a Java application deployed in a Docker container. The machine agent is installed in a standalone container. Our Dockerfile, where the directory ./AppServerAgent contains the agent:

    FROM openjdk
    WORKDIR /app
    ENV APPDYNAMICS_AGENT_APPLICATION_NAME=petclinic
    ENV APPDYNAMICS_AGENT_TIER_NAME=petclinic-tier
    ENV APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1
    ENV APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=41130855-4203-471e-a836-3d241c9e8794
    ENV APPDYNAMICS_CONTROLLER_HOST_NAME=ec2-3-122-240-100.eu-central-1.compute.amazonaws.com
    ENV APPDYNAMICS_CONTROLLER_PORT=8090
    ENV APPDYNAMICS_CONTROLLER_SSL_ENABLED=false
    ENV APPDYNAMICS_AGENT_NODE_NAME=petclinic-node
    COPY ./AppServerAgent/ /opt/appdynamics/
    COPY ./target ./target
    CMD ["java", "-javaagent:/opt/appdynamics/javaagent.jar", "-jar", "target/spring-petclinic-2.4.2.jar"]

We use the official AppDynamics Docker image from Docker Hub:

    docker pull store/appdynamics/machine:4.5

And we run the container with the following parameters, referring to our on-prem AppDynamics controller:

    docker run -d \
      -e APPDYNAMICS_CONTROLLER_HOST_NAME=ec2-3-122-240-100.eu-central-1.compute.amazonaws.com \
      -e APPDYNAMICS_CONTROLLER_PORT=8090 \
      -e APPDYNAMICS_CONTROLLER_SSL_ENABLED=false \
      -e APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1 \
      -e APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=41130855-4203-471e-a836-3d241c9e8794 \
      -e MACHINE_AGENT_PROPERTIES="-Dappdynamics.sim.enabled=true -Dappdynamics.docker.enabled=true" \
      -v /proc:/hostroot/proc:ro -v /sys:/hostroot/sys:ro -v /etc:/hostroot/etc:ro -v /var/run/docker.sock:/var/run/docker.sock \
      appdynamics/machine:4.5

Although the application is instrumented successfully, we can't view any containers. We have enabled Server and Docker Visibility, and our license is also fine. What are we missing?

Thanks in advance and have a nice day!