All Topics


Hi Team, what is the events-per-second (EPS) rate for a flat file input with the universal forwarder?
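If it helps to measure the actual rate rather than quote a theoretical number, the forwarder reports per-source throughput in its own metrics.log. A minimal sketch, run from a search head that receives the forwarder's _internal data; the host and series values are illustrative placeholders:

index=_internal host=my_forwarder source=*metrics.log* group=per_source_thruput series="/var/log/myapp/app.log"
| timechart span=5m avg(eps) AS avg_eps max(eps) AS peak_eps

In practice the achievable EPS depends far more on event size, disk speed, and the maxKBps throughput limit in limits.conf than on any fixed per-product number.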
Hi Team, as checked, our Splunk ITSI default scheduled backup is taking more than 10 hours to complete. Could you please assist us with this? Thanks
Does anything special need to be done when installing Splunk 9.1.1 on RHEL 9.3? Or do I just follow the steps and it will be good to go? Thanks -David
When setting up this receiver, otel fails to start with this message: Error: failed to resolving: yaml: line 89: did not find expected key. Line 89 is smartagent/snmp:. Below is the collector config for this snmp block in otel:

smartagent/snmp:
  type: telegraf/snmp
  agents:
    - "172.xx.11.xx:xx2"
  version: 2
  community: "public"
  fields:
    name: "uptime"
    oid: ".1.3.6.1.2.1.1.3.0"
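"did not find expected key" at a given line almost always means inconsistent indentation in that YAML block. A hedged guess at the intended shape, assuming the telegraf/snmp monitor wants fields as a list of name/oid entries; verify the exact schema against the Smart Agent docs for your collector version:

smartagent/snmp:
  type: telegraf/snmp
  agents:
    - "172.xx.11.xx:xx2"
  version: 2
  community: "public"
  fields:
    - name: "uptime"
      oid: ".1.3.6.1.2.1.1.3.0"

Every sibling key (type, agents, version, community, fields) must sit at exactly the same indent depth; a stray extra or missing space starts a new nesting level and produces exactly this parser error.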
Hello, I am receiving Darktrace events through my Edge Processor acting as a forwarder, and I am a bit new to the SPL2 pipeline. It can probably be solved by transforming something in the pipeline. The problem is that I am indexing events with a _time of -5h and a 2h difference from the event timestamp. Here is an example: [screenshot: time in the Edge Processor] It should be noted that the rest of the events that I ingest through this server are arriving with the correct time.
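One common way to handle this is to re-derive _time inside the pipeline from the raw timestamp with an explicit timezone. A minimal sketch of an Edge Processor SPL2 pipeline; the field name event_ts and the format string are assumptions to adapt to the Darktrace payload:

$pipeline = | from $source
            | eval _time = strptime(event_ts, "%Y-%m-%dT%H:%M:%S%z")
            | into $destination;

Mixed offsets like -5h and 2h usually suggest the timestamp is being parsed without its timezone marker, so checking the TZ of the Edge Processor host alongside the format string is worthwhile.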
I see there is a premium app to show CDR data from CUCM, but is there a way to view this data from a search without that app? I have Splunk set up as a billing server in CUCM but am unable to find any CDR data. We are using Enterprise on-prem.
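As a hedged first step to see whether any CDR records made it in at all (the index and sourcetype filters here are illustrative wildcards, since CUCM delivers CDR flat files to whatever input you configured):

| tstats count WHERE index=* sourcetype=*cdr* BY index, sourcetype, source

If nothing comes back, the gap is usually upstream: CUCM only pushes CDR files over (S)FTP to the billing server, so the receiving directory also has to be monitored by a Splunk file input.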
Hi Team, I want to know if it is possible to find the count of specific fields and show them in different columns. Example: [screenshot] For the above example, I want the result in the below format:

| Date | Count of File RPWARDA | Count of File SPWARAA | Count of File SPWARRA | Diff (RPWARDA - (SPWARAA + SPWARRA)) |
| 2024/04/10 | 49 | 38 | 5 | 6 |

Is it possible using a Splunk query?

Original query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| eval DIR = if(file="RPWARDA", "IN", "OUT")
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| stats count by Date, file, DIR
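One hedged way to get that layout is to pivot with chart instead of stats, then compute the difference from the resulting columns (the column names come straight from the NIDF values):

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| eval Date=strftime(_time, "%Y/%m/%d")
| chart count over Date by file
| fillnull value=0 RPWARDA SPWARAA SPWARRA
| eval Diff = RPWARDA - (SPWARAA + SPWARRA)

chart count over Date by file produces one row per Date with one column per file value, which is exactly the wide table sketched above.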
I am trying to access ACS (Admin Config Service) on a Splunk Cloud trial, but am not able to. After acs login, I am getting an error:

linuxadmin@linuxxvz:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{
    "code": "500-internal-server-error",
    "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error

A second attempt fails identically (requestID=5073a1f1-79d0-9ac1-9d9a-675df569846f). Can someone please help here?
How can a Splunk admin give access to a service account AB-CDRWYVH-L? Access needed: Splunk API read/write access.
I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for log file monitoring. There are different types of logs, which we monitor with different filenames. We observed very high CPU/memory consumption by the splunkd process when there are many input log files to be monitored (approx. > 1000K). All the input data log files are new, and the total number of events per file ranges from 10 to 300. A few metrics.log entries:

{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=batchreader1, current_queue_size=0, max_queue_size=0, files_queued=0, new_files_queued=0","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}

Please help me if there is any configuration tuning to limit the number of files to be monitored.
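A hedged sketch of two knobs commonly tuned for very large file sets; the monitor path is an illustrative placeholder, and ignoreOlderThan only helps if files older than the window can safely be skipped:

# inputs.conf (assuming monitor-style inputs)
[monitor:///var/log/myapp/*.log]
ignoreOlderThan = 7d

# limits.conf
[inputproc]
max_fd = 100

max_fd caps how many file descriptors the tailing processor holds open (default 100), while ignoreOlderThan does not apply to batch (sinkhole) inputs, which delete files after indexing. Given the tailreader1 line shows a queue of roughly 1.4M entries, the backlog itself is the load driver, so narrowing what the input stanzas match, or moving already-processed files out of the monitored directory, is likely to help more than any single limit.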
Hi @ITWhisperer @gcusello, please help. This is the other issue, which is related to a csv dataset and a lookup dataset. From this SPL:

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"

I get the output below:

Subscription | Resource | Key Vault | Secret | Expiration Date | Months
BoB-foo | Dicore-automat | Dicore-automat-keycore | Dicore-tuubsp1sct | 2022-07-28 | -21
BoB-foo | Dicore-automat | Dicore-automat-keycore | Dicore-stor1scrt | 2022-07-28 | -21
BoB-foo | G01462-mgmt-foo | G86413-vaultcore | G86413-secret-foo | |

From this lookup:

| inputlookup cmklookup.csv

I get the output below:

Application | environment | appOwner
Caliber | Dicore - TCG | foo@gmail.com
Keygroup | G01462 - QA | goo@gmail.com
Keygroup | G01462 - SIT | boo@gmail.com

I want to combine the two queries into one, where the output will only display results where the 'environment' and 'Resource' fields match. For instance, if 'G01462' matches in both fields across both datasets, it should be included in the output. How can I do this? Could anyone help write the SPL? I wrote some SPL, but it's not working for me:

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv" | join type=inner [ | inputlookup cmklookup.csv environment ]

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv" | lookup cmklookup.csv environment AS "Resource" OUTPUT "environment"
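Since environment (e.g. "G01462 - QA") and Resource (e.g. "G01462-mgmt-foo") only share a token rather than matching exactly, neither a plain join nor lookup will line them up. A hedged sketch that extracts the shared token on both sides and joins on it; the G\d+ pattern is an assumption about what the common key looks like:

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"
| rex field=Resource "(?<match_key>G\d+)"
| join type=inner match_key
    [ | inputlookup cmklookup.csv
      | rex field=environment "(?<match_key>G\d+)" ]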
Hi, We've just upgraded to 9.2.0, which comes with a UI overhaul as detailed here. We previously had a default home dashboard set as a welcome/landing page for new users. With the new UI, the 'Quick Links' appear by default and you need to click on 'Dashboard' at the top to view the default dashboard. This isn't ideal, as we want all users to see the default dashboard on login. Does anyone know of a way we can change this? I don't want to set a different default app, as having the apps list on the side bar is key. Thanks
Hello everyone, I want to calculate the network address from an IP and a mask:

IP = 192.168.1.10
Mask = 255.255.255.0
Desired result = 192.168.1.0

Unfortunately I can't find a function or method to do this. I looked at the 'cidrmatch' function but it only seems to return a boolean. Is there another way? Thanks for your help!
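A hedged worked example: split both dotted quads into octets, AND them pairwise, and reassemble. The bit_and eval function requires Splunk 9.0 or later; on older versions the same AND can be emulated with floor/modulo arithmetic:

| makeresults
| eval IP="192.168.1.10", Mask="255.255.255.0"
| eval ip_oct=split(IP,"."), mask_oct=split(Mask,".")
| eval o1=bit_and(tonumber(mvindex(ip_oct,0)), tonumber(mvindex(mask_oct,0)))
| eval o2=bit_and(tonumber(mvindex(ip_oct,1)), tonumber(mvindex(mask_oct,1)))
| eval o3=bit_and(tonumber(mvindex(ip_oct,2)), tonumber(mvindex(mask_oct,2)))
| eval o4=bit_and(tonumber(mvindex(ip_oct,3)), tonumber(mvindex(mask_oct,3)))
| eval network=o1.".".o2.".".o3.".".o4

For 192.168.1.10 with 255.255.255.0 this yields 192.168.1.0, since each octet ANDed with 255 is unchanged and ANDed with 0 becomes 0.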
The "ingest" can be listed, but I can't find it in the application dashboard.
Hi all, I am ingesting data and I have a problem. Event example:

field1 = /var/log/asas/log1.log
field2 = /var/log/as/as/log2.log
field3 = /var/log/as/as/log3.log

In the sourcetype (props.conf) I extract the fields like this:

^.*field1 \=\s*(?<log1>.*?)\s*\n
^.*field2 \=\s*(?<log2>.*?)\s*\n
^.*field3 \=\s*(?<log3>.*?)\s*\n

The problem is when the value of some field is empty. In that case it captures the following line, like this:

source:
field1 = /var/log/as/as/log1.log
field2 =
field3 = /var/log/log/as/log3.log

result:
log2 = field3 = /var/log/logs/log3.log

I'm sure there is a way to fix it and make the field appear empty, but I can't find it. Does anyone know how to do it? BR JAR
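The likely culprit is the \s* in front of the capture group: \s also matches newlines, so when the value is empty it consumes the line break and the lazy capture lands on the next line. A hedged fix, assuming multiline events (hence the (?m) flag), restricts the pre-capture whitespace to spaces/tabs and pins the capture to one line:

(?m)^.*field1[ \t]*=[ \t]*(?<log1>[^\n]*)
(?m)^.*field2[ \t]*=[ \t]*(?<log2>[^\n]*)
(?m)^.*field3[ \t]*=[ \t]*(?<log3>[^\n]*)

With [^\n]* an empty value matches the empty string, so the field stays empty (or absent) instead of swallowing the following line.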
Good morning, I'm working on a query to see which application is missing on each host. Can you help me, please? For example:

Host     application
         Guardicore
Host1    cortex
         Tenable
         Trend Micro
Host2    cortex
         Tenable

I need it to show me what is missing; in this example, Guardicore and Tenable. Regards
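A hedged sketch of the usual pattern: collect what each host has, compare against a fixed expected list, and emit the difference. The index name and application field are assumptions to adapt:

index=asset_inventory
| stats values(application) AS installed BY host
| eval expected=split("Guardicore,cortex,Tenable,Trend Micro", ",")
| eval missing=mvmap(expected, if(isnotnull(mvfind(installed, "^".expected."$")), null(), expected))
| table host, installed, missing

mvmap walks the expected list and keeps only the entries that mvfind cannot locate among the host's installed values.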
Hi, We've set up the Data Model Wrangler app on our on-prem search head. We're running Splunk Core 9.0.3 and ES 7.0.1. The latest SA-cim-validator and CIM apps are installed as per the installation notes; these apps are working as expected, showing results from the validator app. I created an index from the Splunk Web UI, visible under the DM Wrangler app, called data_model_wrangler. We've scheduled the 3 saved searches that come with the app as per the instructions. We only see results from 1 of the 3 saved searches, this being data_model_wrangler_dm_index_sourcetype_field. Also, the index created is empty. The two other saved searches, data_model_wrangler_field_quality and data_model_wrangler_mapping_quality, fail with the error: No results to summary index. All saved searches and indexes are enabled. Can anyone please suggest where we've gone wrong in setting this up? @nvonkorff Thanks in advance
From 9.1.3/9.2.1 onwards, the slow indexer/receiver detection capability is fully functional (SPL-248188, SPL-248140). https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues

You can enable it on the forwarding side in outputs.conf:

maxSendQSize = <integer>
* The size of the tcpout client send buffer, in bytes. If the tcpout client (indexer/receiver connection) send buffer is full, a new indexer is randomly selected from the list of indexers provided in the server setting of the target group stanza.
* This setting allows the forwarder to switch to a new indexer/receiver if the current indexer/receiver is slow.
* A non-zero value means that a max send buffer size is set.
* 0 means no limit on max send buffer size.
* Default: 0

Additionally, 9.1.3/9.2.1 and above will correctly log the target IP address causing tcpout blocking:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20

Note: This config works correctly starting with 9.1.3/9.2.1. Do not use it with 9.2.0/9.1.0/9.1.1/9.1.2 (there is an incorrect calculation: https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450).
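For illustration, a hedged example of where the setting lives; the group name, servers, and the 20 MB figure are placeholders to size against your own tcpout queue:

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
maxSendQSize = 20971520

Per the spec text above, once the send buffer toward the current indexer fills past this size, the forwarder abandons that connection and rotates to another indexer from the server list.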
During a graceful indexer/HF restart/stop (basically wherever splunktcp is configured), you may see entries like the following as the last metrics.log lines before splunk finally stops: the splunktcpin queue (name=splunktcpin) shows the same value for current_size, largest_size, and smallest_size (while parsingqueue through indexqueue are not blocked), and TcpInputProcessor fails to drain the splunktcpin queue despite parsingqueue and indexqueue being empty.

02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.397 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=30, smallest_size=0
02-18-2024 00:52:24.396 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=16, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0

During graceful shutdown, pipeline processors are expected to drain the queue. This issue is fixed in 9.2.1 and 9.1.4.
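To spot this condition across a fleet, a hedged search over the internal metrics (standard _internal fields; the 1-minute span is arbitrary):

index=_internal source=*metrics.log* group=queue name=splunktcpin blocked=true
| timechart span=1m count BY host

A sustained non-zero count for a host that is otherwise idle during shutdown matches the stuck-splunktcpin signature above.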
Hi, I am not sure about this risk score value. How do I create a dashboard tile for these fields?