All Topics

I'm working to modify UF configs (inputs.conf, I believe) en masse on the Splunk Cloud platform. The first idea I had was to use a GPO, but we have servers outside the domain.
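One common approach is to point every forwarder at a deployment server and push the config from there; a minimal sketch of the client side, assuming a deployment server reachable from the non-domain hosts (the hostname is a placeholder):

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each UF
    [deployment-client]

    [target-broker:deploymentServer]
    # Management port of your deployment server (placeholder host)
    targetUri = deploy.example.com:8089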
I want to extract the BID@ value from the log below, and in other logs the external ID will be different, so what regular expression will get it into a table?

{
   logger: org.mule.service.http.impl.service.HttpMessageLogger.bmw-crm-wh-xl-gcdm-api-httpListenerConfig
   message: LISTENER POST /api/v1/leads HTTP/1.1 X-SSL-Client-Verify: NONE Host: crm-il-api-prod.bmwgroup.com X-Real-IP: 35.242.211.49 X-Forwarded-For: 3.64.37.232, 35.242.211.49 X-Forwarded-Proto-Real: https Content-Length: 8796 X-Forwarded-Port: 443 X-Forwarded-Proto: https Content-type: application/json Accept: application/json X-c2b-External-Id: BID@1686598556eIiVYNd6BnktQwdOVCO User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_341) Accept-Encoding: gzip,deflate x-c2b-request-id: rrt-6355934869680509287-c-geu3-17546-27196975-4 X-c2b-clientId: bmwdigital X-c2b-clientVariantId: DE-de Authorization: Basic SUxfR0NETV9QUkQ6dkZGczNpQk5OeVVFcVBWUzJ0NWJEdmQ4N1JGcEt4d2ZrYnJzbzZxdG81
}
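A minimal sketch, assuming the ID always follows the X-c2b-External-Id: header and is a single non-whitespace token:

    | rex "X-c2b-External-Id:\s+(?<external_id>\S+)"
    | table external_id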
Hello guys, do you have an example script or curl command that uses the REST API to add data? There is https://docs.splunk.com/Documentation/Splunk/9.0.4/RESTREF/RESTinput#data.2Finputs.2Fmonitor, but how do I specify the serverclass? Thanks for your help.
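A sketch of the documented monitor endpoint with placeholder credentials, host, and path; note that the parameters listed for data/inputs/monitor do not appear to include a serverclass, which is a deployment-server concept rather than a property of the input itself:

    # Create a monitor input via the management port (placeholders throughout)
    curl -k -u admin:changeme https://localhost:8089/services/data/inputs/monitor \
      -d name=/var/log/myapp/app.log \
      -d index=main \
      -d sourcetype=myapp:log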
Hello all, I'm sure the answer exists somewhere but I can't find it... As you can see, I'm just starting with this powerful tool, and I need help. I have logs with FIELD1 and FIELD2 which concern the same thing (an IP address). I need to chart a count of each log line where FIELD1="A" OR FIELD2="B" in a bar graph, split by FIELD1 and FIELD2, so as to see the count of logs by IP address (which can be in either of the two fields). I hope I'm understandable... I stopped here (which displays only the count for the field I count by):

index="XXX" FIELD1="A" OR FIELD2="B" | chart count(eval(FIELD1="A")) AS "AnswerA", count(eval(FIELD2="B")) as "AnswerB" by ???

Many thanks!
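A sketch, assuming the address of interest lives in FIELD1 when present and otherwise in FIELD2, so coalesce yields a single split-by field:

    index="XXX" FIELD1="A" OR FIELD2="B"
    | eval ip=coalesce(FIELD1, FIELD2)
    | chart count(eval(FIELD1="A")) AS "AnswerA", count(eval(FIELD2="B")) AS "AnswerB" by ip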
Hello everyone, I know it's possible to remove the real-time presets from the time picker on classic dashboards as a default setting, and that's what I did previously. But is it possible to do the same in Dashboard Studio? I've realized that some of my users have been using real-time since they started using Dashboard Studio... Thanks!
Hi all, I am trying to get the average of the sum of the last 5 weeks of data for each store. That is, if today is Monday, I want the values from the last 5 Mondays (excluding today) to be summed up and then averaged, and this average displayed per store. The search is index=monitoring, and the field containing the value is TestResults.total_count.
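A sketch, assuming each event carries a store field and a numeric TestResults.total_count, and that the events' extracted timestamps populate date_wday; it sums each of the last 5 Mondays separately, then averages those daily sums per store:

    index=monitoring earliest=-5w@d latest=@d date_wday=monday
    | bin _time span=1d
    | stats sum(TestResults.total_count) AS daily_total by store, _time
    | stats avg(daily_total) AS avg_last_5_mondays by store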
Hi, we are using syslog-ng to collect logs on a syslog server, where we have installed the Universal Forwarder component (version 8.2.8) to forward the logs to Cribl workers. During a VA scan we received a report stating that the SSL certificate was expired / had the wrong hostname. We received a renewed SSL certificate from the project, replaced cacert.pem under /opt/splunkforwarder/etc/auth, and restarted the service. Once done, we asked the team to scan again, but it is still pointing to the old certificate and flagging the same vulnerability. So we are not sure whether we need to update any other .pem files, such as server.pem or ca.pem. Can you please help us here?

Regards, Gayathri
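One way to confirm which certificate is actually being presented on the scanned port (a sketch; substitute your host and port):

    # Show the subject and validity dates of the certificate served on the port
    openssl s_client -connect <host>:<port> -showcerts </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -dates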
Hi, I'm adding AppDynamics to our Android project. We have different build variants, e.g. "release", "debug", "inhouse", ..., and we also have a different "accountName" and "licenseKey" for each variant. I wanted to know how I can provide a variant-specific config for the "adeum" config block in the build.gradle file. Note: I'm able to switch between app keys when initializing instrumentation in the launcher activity, but I don't know how to switch the adeum Gradle plugin config for each build type.

adeum {
    account {
        name 'account2'
        licenseKey 'license2'
    }
}

adeum {
    account {
        name 'account1'
        licenseKey 'license1'
    }
}
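A sketch, assuming the adeum block accepts ordinary Groovy expressions evaluated at configuration time; the appdVariant Gradle property here is hypothetical, not part of the plugin:

    // build.gradle (Groovy): pick the account per build, e.g. ./gradlew assemble -PappdVariant=inhouse
    def appdAccounts = [
            release: [name: 'account1', key: 'license1'],
            inhouse: [name: 'account2', key: 'license2']
    ]
    def appd = appdAccounts[project.findProperty('appdVariant') ?: 'release']

    adeum {
        account {
            name appd.name
            licenseKey appd.key
        }
    }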
Hi all, I have a dashboard to build that shows the number of Helpdesk calls for:
1) year to date
2) average monthly
3) average daily
I have the query set up with 'year to date' selected, I've done a | stats count, and saved it to a dashboard. When I save this onto the dashboard, the time defaults to last 24 hours. I've gone into the source editor and removed the query parameters, but this hasn't helped. I think I need to set the time on my query (at the top) to year to date. Can someone help me with this code? Many thanks, P
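A sketch in Simple XML, assuming a classic dashboard; pinning earliest/latest on the panel's own search keeps it at year to date regardless of the dashboard's time default (the index name is a placeholder):

    <search>
      <query>index=helpdesk | stats count</query>
      <earliest>@y</earliest>
      <latest>now</latest>
    </search>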
Hi all, I have 3 APIs:
1. For the first API, status codes 200 & 403 count as success; all remaining status codes are failures.
2. For the 2nd & 3rd APIs, only 200 is a success; all remaining codes are failures.
I need to show a line chart with success percentage (0 to 100) on the Y axis and time on the X axis. I have to use a timechart command like:

|timechart span=5m eval(if(count>10, round(mean(status),2), 100)) as percentage by countryCode useother=false limit=100

Please help with this.
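A sketch, assuming an api field distinguishes the three APIs (the field and index names are placeholders); classifying each event as success 1/0 first makes the percentage a simple ratio:

    index=your_index
    | eval success=if((api="api1" AND (status=200 OR status=403)) OR (api!="api1" AND status=200), 1, 0)
    | timechart span=5m useother=false limit=100 eval(round(sum(success)*100/count, 2)) AS percentage by countryCode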
Hi community, there are a lot of articles and videos on YouTube etc., but at some point it becomes very confusing, so I'm asking for a little help here.

Topic: I want to use a syslog-ng server on Ubuntu to trim logs and send them to Splunk.

What I have done so far:
- Installed an Ubuntu server (Ubuntu 22.04.2 LTS)
- Installed the universal forwarder (splunkforwarder-9.1.0.1-77f73c9edb85-linux-2)
- Installed syslog-ng
- Configured inputs.conf in /opt/splunkforwarder/etc/apps/search/local:

# FortiGate
[monitor:///root/syslog/logs/fortinet/fortigate/*/*.log]
sourcetype = fgt_log
index = fortigate
disabled = false
host_segment = 6

- Configured outputs.conf in /opt/splunkforwarder/etc/system/local:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = X.X.X.X:9997

[tcpout-server://X.X.X.X:9997]
useACK = true
useSSL = true
clientCert = $SPLUNK_HOME/etc/auth/XXX/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/XXX/cacert.pem
sslVerifyServerCert = true
sslAltNameToCheck = XXXX

- Configured the certs

What is wrong?
- There are a lot of outputs.conf and inputs.conf directories. Which is the correct one?
- In Splunk I can see that logs are coming in using index=_internal, but the FortiGate logs are not there:

07-11-2023 12:48:48.579 +0200 INFO AutoLoadBalancedConnectionStrategy [2746 TcpOutEloop] - Found currently active indexer. Connected to idx=10.10.10.203:9997:0, reuse=1.

- Also, I noticed that some of the FortiGate logs are under index=main.
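On the directory question: a sketch using btool, which prints the merged configuration and (with --debug) the file each setting came from, so you can see which copy of inputs.conf/outputs.conf actually wins:

    /opt/splunkforwarder/bin/splunk btool inputs list monitor --debug
    /opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug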
I need help creating a regex that extracts subnet masks
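A sketch, assuming dotted-quad masks such as 255.255.255.0; each octet is restricted to the values that can occur in a contiguous mask, though out-of-order quads like 0.255.0.0 would still slip through:

    | rex "(?<subnet_mask>(?:255|254|252|248|240|224|192|128|0)(?:\.(?:255|254|252|248|240|224|192|128|0)){3})"
    | table subnet_mask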
Hi, I have pushed data from a file into Splunk. The size of the file is 94921 bytes, but when I pushed it into Splunk, the size reported for the index in _internal is 90965 bytes. The index I used to push the data is a brand-new index. This is how I compared the size of the file and the size of the index in Splunk:

1. I created a log file which is 94921 bytes in size and checked it with stat log_100kb.log:

File: 'log_100kb.log'  Size: 94921           Blocks: 200        IO Block: 32768  regular file

2. I pushed this log file to Splunk using OpenTelemetry.

3. I used index=_internal source=*license_usage.log type="Usage" to check the size of the data pushed into the index I created:

07-11-2023 01:51:44.679 -0700 INFO  LicenseUsage - type=Usage s="http:100kb_logs" idx="100kb_logs" b=90965 .......

May I know why the size differs between the file and the index, please? Thank you.
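To total everything license usage has recorded for that index (a sketch; b is the raw bytes Splunk counted at index time, which reflect the parsed events rather than the on-disk file, so stripped newlines, merged lines, or timestamp handling can account for a small difference):

    index=_internal source=*license_usage.log type="Usage" idx="100kb_logs"
    | stats sum(b) AS indexed_bytes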
I want to create a dashboard to monitor pods: CPU utilization, memory utilization, the namespace, etc. What do I need?
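A sketch of one panel query, assuming pod metrics are already being collected into a metrics index; the index, metric, and dimension names here are hypothetical and depend on your collector (e.g. the Splunk OpenTelemetry Collector for Kubernetes):

    | mstats avg(_value) AS cpu_utilization WHERE index=k8s_metrics AND metric_name="k8s.pod.cpu.utilization" BY k8s.pod.name, k8s.namespace.name span=5m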
Hi, I'm having some issues with the EULA on https://github.com/splunk/SA-ctf_scoreboard. I'm not quite sure what data should be filled into the lookup editor. I'm trying to set up Boss of the SOC.
Running the machine agent:

2023-07-11 01:36:28.654 Using Agent Version [Machine Agent v23.6.0.3657 GA compatible with 4.4.1.0 Build Date 2023-06-22 09:28:42]
..
2023-07-11 01:36:33.954 Started AppDynamics Machine Agent Successfully.

I added custom tags to the configuration as described here and validated that they show up in the UI. Running an API call:

<appdynamics-controller>/controller/sim/v2/user/machines?type=CONTAINER&format=LITE&offset=0&limit=-1

I get the VM back, but the tags are empty:

{"hostId":"aaa","name":"aaa","hierarchy":[],"properties":{"Processor|Logical Core Count":"2","vCPU":"2","Processor|Physical Core Count":"2","OS|Architecture":"x86_64","Hostname":"aaa","Bios|Version":"1.15.0-1","AppDynamics|Agent|Agent version":"Machine Agent v23.6.0.3657 GA compatible with 4.4.1.0 Build Date 2023-06-22 09:28:42","AppDynamics|Agent|Install Directory":"/root/appd_agent","OS|Kernel|Release":"5.14.0-307.el9.x86_64","AppDynamics|Agent|Build Number":"cdd5a21","AppDynamics|Machine Type":"NON_CONTAINER_MACHINE_AGENT","OS|Kernel|Name":"Linux","AppDynamics|Agent|Machine Info":"os.name=Linux|os.arch=amd64|os.version=5.14.0-307.el9.x86_64","Total|CPU|Logical Processor Count":"2","AppDynamics|Agent|JVM Info":"java.vm.name=OpenJDK 64-Bit Server VM|java.vendor=Azul Systems, Inc.|java.version=11.0.19|user.language=en|user.country=US|user.variant=unknown"}, "tags":{},"agentConfig":{"rawConfig":{}},"id":16583826,"memory":{},"volumes":[],"cpus":[],"networkInterfaces":[],"controllerConfig":{"rawConfig":{}},"simEnabled":true,"simNodeId":23402361,"dynamicMonitoringMode":"KPI","type":"PHYSICAL","historical":false}

As you can see -> "tags":{}. Some additional logs from the agent:

[system-thread-0] 11 Jul 2023 01:37:04,274 DEBUG ConfigurationManager - Building configuration types for 'ServerMonitoring' -> {samplingInterval=30000, networkMonitorConfig={maxNumberNetworks=5, whitelistSelectorRegex=, blacklistSelectorRegex=^veth.*|^vnet.*}, defaultDiskSectorSize=512, memoryMonitorConfig={samplingInterval=3000}, basicEnabled=true, volumeMonitorConfig={maxNumberVolumes=5, whitelistSelectorRegex=, blacklistSelectorRegex=^/var/lib/docker/.*, samplingInterval=3000}, processMonitorConfig={maxClassIdLength=50, processSelectorRegex=^.+[^]]$, minLiveTimeMillisBeforeMonitoring=60000, maxNumberMonitoredClasses=20, defaultProcessClassSelector=}, percentileMonitorConfig={percentileEnabled=true}, tags={environment=[production], testingTagKey=[testingTagValue]}, cpusMonitorConfig={samplingInterval=3000}}

Should I add something to the API call? Is it a bug? Thanks!
There are over 10000 events and I want to extract the events of 100 random users. Is there a simple way to do this? Thanks in advance!
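A sketch, assuming a user field and a placeholder index name; the subsearch shuffles the distinct users with random(), keeps 100, and returns them as a filter for the outer search:

    index=your_index
        [ search index=your_index
          | dedup user
          | eval r=random()
          | sort 0 r
          | head 100
          | fields user ]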
Here is the field: http_x_forwarded_for="222.xx.xx.xx, 122.211.xx.xx". I have tried:

| rex field=_raw "http_x_forwarded_for\s*=\s*(?<ip_address>[^,\s]+)"
| table ip_address

But it does not work, please help!
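A sketch, assuming the value is wrapped in double quotes in the raw event, which the original character class would capture along with the address; this takes the first (leftmost) address in the list:

    | rex field=_raw "http_x_forwarded_for=\"(?<ip_address>[^,\"]+)"
    | table ip_address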
Hi, can anybody send me an inventory of the root causes that can provoke an interruption of Splunk indexing? Thanks.