All Topics

How does a Splunk admin grant access for a service account AB-CDRWYVH-L? Access needed: Splunk API read/write access.
I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for log file monitoring. There are different types of logs that we monitor, with different filenames. We observed very high CPU/memory consumption by the splunkd process when there are many input log files to be monitored (> 1000K approx.). All the input log files are new, and the total number of events ranges from 10 to 300. A few metrics log entries:

{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=batchreader1, current_queue_size=0, max_queue_size=0, files_queued=0, new_files_queued=0","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}

Please help me if there is any configuration tuning to limit the number of files to be monitored.
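Not an authoritative answer, but a sketch of settings that are commonly tuned to rein in the tailing processor; the monitor path and values below are assumptions, and they cap resource usage rather than strictly limiting the file count:

# inputs.conf on the forwarder -- the monitored path is a placeholder
[monitor:///var/log/myapp]
# Skip files whose modification time is older than this window,
# shrinking the set of files the tailing processor tracks
ignoreOlderThan = 1d
# Only pick up files matching this regex (assumed naming convention)
whitelist = \.log$

# limits.conf on the forwarder
[inputproc]
# Cap the number of open file descriptors the file-tracking code uses (default is 100)
max_fd = 100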
Hi @ITWhisperer @gcusello, please help. This is another issue, related to a CSV dataset and a lookup dataset. From this SPL:

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"

I get the output below:

Subscription  Resource         Key Vault               Secret              Expiration Date  Months
BoB-foo       Dicore-automat   Dicore-automat-keycore  Di core-tuubsp1sct  2022-07-28       -21
BoB-foo       Dicore-automat   Dicore-automat-keycore  Dicore-stor1scrt    2022-07-28       -21
BoB-foo       G01462-mgmt-foo  G86413-vaultcore        G86413-secret-foo

From this lookup:

| inputlookup cmklookup.csv

I get the output below:

Application  environment   appOwner
Caliber      Dicore - TCG  foo@gmail.com
Keygroup     G01462 - QA   goo@gmail.com
Keygroup     G01462 - SIT  boo@gmail.com

I want to combine the two queries into one, where the output will only display results where the 'environment' and 'Resource' fields match. For instance, if 'G01462' matches in both fields across both datasets, it should be included in the output. How can I do this? Could anyone help me write the SPL? I wrote some SPL, but it's not working for me:

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"
| join type=inner [ | inputlookup cmklookup.csv environment ]

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"
| lookup cmklookup.csv environment AS "Resource" OUTPUT "environment"
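A sketch of one possible approach, assuming the common token in Resource and environment is "G" followed by digits (my reading of the sample rows): extract it into a shared field on both sides, then join on that field.

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"
| rex field=Resource "(?<match_key>G\d+)"
| join type=inner match_key
    [| inputlookup cmklookup.csv
     | rex field=environment "(?<match_key>G\d+)"]
| table Subscription Resource "Key Vault" Secret Application environment appOwner

With the sample data, G01462 would be extracted from both "G01462-mgmt-foo" and "G01462 - QA", so only rows sharing that token survive the join.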
Hi, We've just upgraded to 9.2.0, which comes with a UI overhaul as detailed here. We previously had a default home dashboard set as a welcome/landing page for new users. With the new UI, the 'Quick Links' appear by default and you need to click on 'Dashboard' at the top to view the default dashboard. This isn't ideal, as we want all users to see the default dashboard on login. Does anyone know of a way we can change this? I don't want to set a different default app, as having the apps list in the side bar is key. Thanks
Hello everyone, I want to calculate the network address from an IP and a mask:

IP = 192.168.1.10
Mask = 255.255.255.0
Desired result = 192.168.1.0

Unfortunately I can't find a function or method to do this. I looked at the 'cidrmatch' function, but it only seems to return a boolean. Is there another way? Thanks for your help!
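A sketch that does the bitwise AND per octet; it assumes Splunk 9.0 or later, where the bit_and eval function was introduced:

| makeresults
| eval ip="192.168.1.10", mask="255.255.255.0"
| eval i=split(ip,"."), m=split(mask,".")
| eval network=tostring(bit_and(tonumber(mvindex(i,0)), tonumber(mvindex(m,0)))) . "."
             . tostring(bit_and(tonumber(mvindex(i,1)), tonumber(mvindex(m,1)))) . "."
             . tostring(bit_and(tonumber(mvindex(i,2)), tonumber(mvindex(m,2)))) . "."
             . tostring(bit_and(tonumber(mvindex(i,3)), tonumber(mvindex(m,3))))

For 192.168.1.10 / 255.255.255.0 this yields 192.168.1.0. On versions before 9.0 you would have to emulate the AND arithmetically per octet.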
Tie "ingest" can be list. But can't find in the application dashboard.
Hi all, I am ingesting data and I have a problem. Event example:

field1 = /var/log/asas/log1.log
field2 = /var/log/as/as/log2.log
field3 = /var/log/as/as/log3.log

In the sourcetype (props.conf) I do it like this:

^.*field1 \=\s*(?<log1>.*?)\s*\n
^.*field2 \=\s*(?<log2>.*?)\s*\n
^.*field3 \=\s*(?<log3>.*?)\s*\n

The problem is when the value of some field is empty. In that case it captures the following line, like this:

source:
field1 = /var/log/as/as/log1.log
field2 =
field3 = /var/log/log/as/log3.log

result:
log2 = field3 = /var/log/logs/log3.log

I'm sure there is a way to fix it and make the field appear empty, but I can't find it. Does anyone know how to do it?

BR JAR
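The likely culprit is that \s* can consume the trailing newline, letting the lazy .*? backtrack onto the next line. A sketch that confines both the separator and the capture to a single line (the stanza name is a placeholder; note an empty match may leave the field unset rather than empty):

# props.conf
[your_sourcetype]
EXTRACT-log1 = (?m)^field1\s*=[ \t]*(?<log1>[^\r\n]*)
EXTRACT-log2 = (?m)^field2\s*=[ \t]*(?<log2>[^\r\n]*)
EXTRACT-log3 = (?m)^field3\s*=[ \t]*(?<log3>[^\r\n]*)

Because [ \t]* and [^\r\n]* cannot cross a newline, an empty value can no longer pull in the next line.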
Good morning, I'm working on a query to see which application is missing on each host. Can you help me, please? For example:

Host     application
         Guardicore
Host1    cortex
         Tenable
         Trend Micro
Host2    cortex
         Tenable

I need it to show me what is missing; in this example, Guardicore and Tenable. (A sketch follows below.)

Regards
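A sketch, assuming events carry host and application fields and the expected set is fixed; the index name and app list are placeholders, and I am assuming mvfind accepts a computed regex here:

index=your_index
| stats values(application) as installed by host
| eval expected=split("Guardicore,cortex,Tenable,Trend Micro", ",")
| eval missing=mvmap(expected, if(isnull(mvfind(installed, "^" . expected . "$")), expected, null()))
| table host installed missing

mvmap walks the expected list and keeps only the entries with no exact match among the installed values, so "missing" lists the absent applications per host.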
Hi, We've set up the Data Model Wrangler app on our on-prem search head. We're running Splunk Core 9.0.3 and ES 7.0.1. The latest SA-cim-validator and CIM apps are installed as per the installation notes. These apps are working as expected, showing results from the validator app. I created an index from the Splunk Web UI, visible under the DM Wrangler app, called data_model_wrangler. We've scheduled the 3 saved searches that come with the app as per the instructions. We only see results from 1 of the 3 saved searches in the DM Wrangler app, this being data_model_wrangler_dm_index_sourcetype_field. Also, the index created is empty. The two other saved searches are data_model_wrangler_field_quality and data_model_wrangler_mapping_quality, with errors: "No results to summary index." All saved searches and indexes are enabled. Can anyone please suggest where we've gone wrong in setting this up? @nvonkorff Thanks in advance
From 9.1.3/9.2.1 onwards, the slow indexer/receiver detection capability is fully functional (SPL-248188, SPL-248140). https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues

You can enable it on the forwarding side in outputs.conf:

maxSendQSize = <integer>
* The size of the tcpout client send buffer, in bytes. If the tcpout client (indexer/receiver connection) send buffer is full, a new indexer is randomly selected from the list of indexers provided in the server setting of the target group stanza.
* This setting allows the forwarder to switch to a new indexer/receiver if the current indexer/receiver is slow.
* A non-zero value means that a max send buffer size is set.
* 0 means no limit on max send buffer size.
* Default: 0

Additionally, 9.1.3/9.2.1 and above will correctly log the target IP address causing tcpout blocking:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20

Note: This config works correctly starting with 9.1.3/9.2.1. Do not use it with 9.2.0/9.1.0/9.1.1/9.1.2 (there is an incorrect calculation: https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450).
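For illustration only, a minimal outputs.conf sketch on the forwarder; the group name, server list, and the 10 MB figure are assumptions, not recommendations:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Switch to another indexer once this connection's send buffer holds ~10 MB
# (only meaningful on 9.1.3/9.2.1 and later, per the note above)
maxSendQSize = 10485760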
During a graceful indexer/HF restart/stop (basically wherever splunktcp is configured), look at the last entries in metrics.log before Splunk finally stops. If the splunktcpin queue (name=splunktcpin) shows the same value for current_size, largest_size, and smallest_size (while neither parsingqueue nor indexqueue is blocked), TcpInputProcessor has failed to drain the splunktcpin queue even though parsingqueue and indexqueue are empty.

02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.397 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=30, smallest_size=0
02-18-2024 00:52:24.396 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=16, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0

During a graceful shutdown, pipeline processors are expected to drain the queue. This issue is fixed in 9.2.1 and 9.1.4.
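To check whether a host hit this symptom, one hedged starting point is to look for the blocked splunktcpin queue in metrics.log around the shutdown window:

index=_internal source=*metrics.log* group=queue name=splunktcpin blocked=true
| stats count earliest(_time) as first_blocked latest(_time) as last_blocked by host, ingest_pipe
| convert ctime(first_blocked) ctime(last_blocked)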
Hi, I am not sure about this risk score value. How do I create a dashboard tile for these fields?
Currently, I have a field called pluginText which looks like the following (italicized words are anonymized placeholders for what they represent):

<plugin_output>
The following software are installed on the remote host:
Vendor Software [version versionnumber] [installed on date]
...
...
...
</plugin_output>

I wish to extract Vendor, Software, and versionnumber into separate fields and require a rex to do so. I am unfamiliar with using rex on this type of list, so I was hoping someone could point me in the right direction.
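A starting-point rex, assuming the vendor is a single token and the version sits inside "[version ...]" exactly as shown; adjust once you see the real layout:

... | rex field=pluginText max_match=0 "(?m)^(?<vendor>\S+)\s+(?<software>.+?)\s+\[version (?<versionnumber>[^\]]+)\]"

max_match=0 makes each field multivalue, one entry per software line; to split them into separate results, zip them together with mvzip before mvexpand.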
I am trying to join two searches together to table the combined results by host. The first search below shows the number of events in the last hour by host, index, and sourcetype:

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

The second search shows the ingest per hour in GB by host:

(index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
| stats sum(b) as Usage by h
| eval Usage=round(Usage/1024/1024/1024,2)
| rename h as host, Usage as usage_latest_hour
| addtotals
| sort -Total
| fields - Total

Can you please help with how I would join these two searches together to display host, index, sourcetype, events_latest_hour, and usage_latest_hour? Basically I want to table the results of the first search and also include the "usage_latest_hour" results from the second search in the table.
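A sketch of one way to do it: keep the tstats search as the outer search and left-join the usage by host (in license_usage.log the h field is the reporting host); hosts without usage data will simply show an empty column, and join's usual subsearch limits apply:

| tstats count as events_latest_hour where index=* by host, index, sourcetype
| join type=left host
    [ search index=_internal host=splunk_shc source=*license_usage.log* type=Usage
    | stats sum(b) as bytes by h
    | eval usage_latest_hour=round(bytes/1024/1024/1024, 2)
    | rename h as host
    | fields host usage_latest_hour ]
| table host index sourcetype events_latest_hour usage_latest_hour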
Hi All, I have set up the object and event input configuration in the Salesforce TA. I am able to see the object logs but unable to see the event logs in Splunk Cloud. Any direction on triaging the issue? Appropriate permissions are provided for the Salesforce user.
I haven't found a definitive answer in any of the docs yet. Is it possible to utilize Splunk SmartStore when everything is in Splunk Cloud and we do not have an on-prem Enterprise deployment?
I have a timestamp with this format: "2024-01-01T20:00:00.190000000Z". I can convert this to a normal format using rex; however, I want to know if there is an alternative to convert it to a normal time format.
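A sketch using strptime/strftime instead of rex; the %9Q nanosecond specifier is my reading of the subsecond format strings, so verify it on your version:

| makeresults
| eval ts="2024-01-01T20:00:00.190000000Z"
| eval epoch=strptime(ts, "%Y-%m-%dT%H:%M:%S.%9QZ")
| eval normal=strftime(epoch, "%Y-%m-%d %H:%M:%S.%3Q")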
Hi. I'm trying to use a subsearch, but I'm not sure what I am doing wrong. First, the inner search produces a list of accounts like this one:

index=main sourcetype=vpacmanagement
| eval DateStamp3 = strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
| eval MemberName2 = split(TeamMember, "\\")
| eval Member2 = mvindex(MemberName2, 1)
| eval Member2 = upper(Member2)
| where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member2 = "ADMMICHAEL_HAYES3"
| dedup WONumber
| rename Member2 as Member
| fields Member

I get one account; all OK so far. But when using it in an outer search:

index=main sourcetype=vpacmanagement
| join Member
    [ search index=main sourcetype=vpacmanagement
    | eval DateStamp3 = strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
    | eval MemberName2 = split(TeamMember, "\\")
    | eval Member2 = mvindex(MemberName2, 1)
    | eval Member2 = upper(Member2)
    | where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member2 = "ADMMICHAEL_HAYES3"
    | dedup WONumber
    | rename Member2 as Member
    | fields Member ]
| eval DateStamp2 = strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
| eval month = strftime(DateStamp2, "%m")
| eval year = strftime(DateStamp2, "%Y")
| eval GroupName = split(DomainGroup, "\\"), MemberName = split(TeamMember, "\\")
| eval Name = mvindex(GroupName, 1), Member = mvindex(MemberName, 1)
| eval RequestType = upper(RequestType), Name = upper(Name), Member = upper(Member)
| where not like(Status, "%/%/%") and DateStamp2 > relative_time(now(), "-2d")
| dedup RequestType, DomainGroup, TeamMember
| fields WONumber, DateStamp, ResourceSteward, RequestType, Name, Member, Status
| table WONumber, DateStamp, ResourceSteward, RequestType, Name, Member, Status
| sort DateStamp2

As you can see, I do some calculations and use the Member field as the value for the join, but it still doesn't get any account from the outer search, even though the element exists in the outer search. Does anyone know what I am missing? Thanks
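One detail that stands out: in the outer search, Member is only derived after the join, so at join time the outer events have no Member field to match against. A sketch that computes it before joining (same logic, reordered; the rest of the pipeline would follow unchanged):

index=main sourcetype=vpacmanagement
| eval Member = upper(mvindex(split(TeamMember, "\\"), 1))
| join Member
    [ search index=main sourcetype=vpacmanagement
    | eval DateStamp3 = strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
    | eval Member = upper(mvindex(split(TeamMember, "\\"), 1))
    | where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member = "ADMMICHAEL_HAYES3"
    | dedup WONumber
    | fields Member ]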
Where is the web server actually installed to and run from for SOAR in a RHEL environment? Unlike the Splunk Web UI, where I can modify the web.conf file, for SOAR I only see a massive number of .py files everywhere. I need to figure out where it actually starts and sets its paths, specifically where SSL is chosen. Assume I have installed SOAR to /data. Thanks for any assistance!
I have an alert based on the below search (obfuscated):

... | eval APPDIR=source
| rex field=APPDIR mode=sed "s|/logs\/.*||g"
| eventstats values(APPDIR) as APPDIRS
| eval Level=if("/app/5000" IN (APPDIRS), "PRODUCTION", "Non-production")
| eval APPDIRS=mvjoin(APPDIRS, ",")

The idea is to discern the affected application instance (there are multiple logs under each of the /app/instance/logs/) and then to determine whether the instance is a production one or not. In the search results, all three new fields (APPDIR, APPDIRS, and Level) are populated as expected. But they don't show up in the e-mails. The "Subject: $Level$ app in $APPDIRS$" expands to a mere "Subject:  app in ". Nor are the fields expanded in the body of the alert e-mail. Now, I understand that event-specific fields -- like the singular APPDIR above -- cannot be expected to work in an alert. But the plural APPDIRS, as well as the Level, are aggregates, aren't they? What am I doing wrong, and how do I fix it?
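A hedged pointer rather than a definitive answer: alert e-mail tokens read fields from the search results through the $result.fieldname$ form, and only the first result row is substituted; a bare $Level$ is not resolved from result fields. Assuming the fields survive into the final result table, the subject would look like:

Subject: $result.Level$ app in $result.APPDIRS$

Since eventstats stamps the same APPDIRS (and hence Level) onto every event, the first row should carry the values you expect; keeping them in the output, e.g. with | table _time APPDIRS Level, makes sure they are present at substitution time.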