All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, if I understood your issue right, you have used too much SPLUNK_DB space on your system. There are many instructions on the net for how to move/change SPLUNK_DB to another drive on Windows, which is what you need to do. Another option is to lower your disk space limit, but that will probably prevent the system from collecting enough events. Another option is to ask a local Splunk Partner to fix this for you. r. Ismo
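A rough sketch of what those instructions usually boil down to on Windows (the target drive D:\SplunkDB and the install path are illustrative, not from this thread): stop Splunk, move the databases, then point SPLUNK_DB at the new location in splunk-launch.conf.

```bat
REM Stop Splunk before touching the databases
"C:\Program Files\Splunk\bin\splunk.exe" stop

REM Move the index databases to the new drive (D:\SplunkDB is illustrative)
robocopy "C:\Program Files\Splunk\var\lib\splunk" "D:\SplunkDB" /E /MOVE

REM Then edit %SPLUNK_HOME%\etc\splunk-launch.conf and set:
REM   SPLUNK_DB=D:\SplunkDB

"C:\Program Files\Splunk\bin\splunk.exe" start
```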
That worked! But I'm trying to color the words "Large Effect" and "No" red; not sure what I did wrong here?

<colorPalette type="expression">case (match(value,"Large Effect") OR (match(value,"No"),"#ff0000", match(value,"Medium Effect"), "#ffff00",match(value,"Small Effect"),"#00ff00",true(),"#ffffff")</colorPalette>
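For what it's worth, the parentheses in that expression look unbalanced: the second match() is wrapped in an extra opening paren, so case() never pairs the "Large Effect" OR "No" condition with "#ff0000" as intended. A corrected sketch of the same expression (same matches, same colors) would be:

```xml
<colorPalette type="expression">case(match(value,"Large Effect") OR match(value,"No"), "#ff0000", match(value,"Medium Effect"), "#ffff00", match(value,"Small Effect"), "#00ff00", true(), "#ffffff")</colorPalette>
```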
Thanks @KendallW  We are using automatic header-based field extraction for our JSON documents as described here: https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Extractfieldsfromfileswithstructureddata#Use_configuration_files_to_enable_automatic_header-based_field_extraction

So when fields like serviceType are present in the events, I believe they are getting indexed. You are correct that serviceType is completely missing from the events for time ranges where my tstats query returns 0, but I don't understand why it would work this way, especially since adding a wildcard to either OR value returns the correct non-zero result:

|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857*" OR serviceType="unmanaged")
|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmanaged*")

This really seems like a bug. Thanks for the tip about |append, though this is very undesirable and possibly infeasible for us considering the complexity of our queries and how many we have.
You will need to mvexpand the field; that way you can colour the rows of the field.
?
It is an old thread, but with more experience under my belt I feel I should fix the solution. Map is hardly ever _the_ solution, so in this case it isn't either. It's better to count and filter:

index=logins
| streamstats count by login
| where count<=5

And that's it.
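The count-and-filter idea translates directly outside SPL as well; here is a small Python sketch (sample data invented for illustration) that mirrors `streamstats count by login | where count<=5`:

```python
from collections import defaultdict

def first_n_per_key(events, key, n=5):
    """Keep at most the first n events per value of `key`, the same
    count-and-filter as `streamstats count by <key> | where count<=n`."""
    counts = defaultdict(int)  # running count per key, like streamstats
    kept = []
    for event in events:
        counts[event[key]] += 1
        if counts[event[key]] <= n:  # the `where count<=n` filter
            kept.append(event)
    return kept

# 7 logins for alice, 3 for bob -> keeps 5 + 3 = 8 events
events = [{"login": "alice"}] * 7 + [{"login": "bob"}] * 3
print(len(first_n_per_key(events, "login")))  # 8
```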
Thanks for your response @scelikok . We are using a custom sourcetype for our events which is configured in inputs.conf and props.conf to extract the fields from the events as JSON, so based on my reading of this documentation I think all the fields from the JSON are getting indexed, including serviceType: https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Extractfieldsfromfileswithstructureddata#Use_configuration_files_to_enable_automatic_header-based_field_extraction

We are able to use tstats to query on any of our JSON fields that I've tried; for example, this returns the correct non-zero count:

|tstats count where index="my_index" serviceType="unmanaged"

I tried your tip about using TERM(), but I don't think it applies here. As I mentioned, I can use a wildcard on the serviceType value even without TERM() and it works as expected, even when there are no events containing serviceType:

|tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmana*")
Hello! How are you?

We are currently working on an integration with Splunk Cloud to retrieve a set of data that we persist in an index and then search to generate a table. For this, we need to use the Splunk Cloud API from another of our developments. We generated a new local user on the platform and assigned it a new role with 'search' permission on the index we need to query. Then, we performed this test call from our computer:

curl -v -u username:p455w0rd -k https://<organization>.splunkcloud.com:8089/services/search/jobs -d search='index="index_to_query" rule="inventory" | stats count by rawData.Association.asset | sort - count'

but the response we get is as follows:

* Trying <IP>:8089...
* connect to <IP> port 8089 failed: Operation timed out
* Failed to connect to <organization>.splunkcloud.com port 8089 after 75195 ms: Couldn't connect to server

We investigated in Splunk forums and found that it could be caused by a Splunk Cloud restriction, and that apparently we could solve it by adding the subnets from which we make the calls in: https://<organization>.splunkcloud.com/en-GB/manager/system/manage_system_config/ip_allow_list

We tried that, but we're getting the same error message. Have you faced this in the past?

Thank you very much! Regards, Juanma
The diskspace remaining=6235 has breached the yellow threshold for filesystems=['C:\Program Files\Splunk\var\lib\splunk\_introspection\db' 'C:\Program Files\Splunk\var\lib\splunk\_internaldb\db' 'C:\Program Files\Splunk\var\lib\splunk\audit\db' 'C:\Program Files\Splunk\var\lib\splunk\_configtracker\db' 'C:\Program Files\Splunk\var\lib\splunk\audit\colddb' 'C:\Program Files\Splunk\var\lib\splunk\_metrics\db' 'C:\Program Files\Splunk\var\lib\splunk\_configtracker\colddb']

Hi, I am new to Splunk. I am using Splunk Enterprise on my laptop, and I have been getting this error about disk space. Please help me solve it.
Hi, can anyone help me with how to do Power BI log analysis in Splunk? I just want to integrate Power BI with Splunk and then, in a Splunk index, check the logs for the Power BI app: who is logged in, etc. Does anyone know what to do in this case?
This is what I used, and after applying it, the results just highlight the entire mv field in red:

<format type="color">
<colorPalette type="expression">case (match(value,"Large Effect"), "#ff0000",match(value,"Medium Effect"), "#ffff00",match(value,"Small Effect"),"#00ff00",true(),"#ffffff")</colorPalette>
</format>

Looking for: Small effect -> Green, Medium effect -> Orange, and Large effect -> Red.

Continuing from this search: @ITWhisperer  https://community.splunk.com/t5/Splunk-Search/How-to-extract-a-csv-data-fields-message-data-into-fields/m-p/695151#M236406
Hi @KendallW  I tried your suggestion with real data (a 100k-row CSV). I expected the left side as a baseline and the number of rows to remain the same, but the number of rows doesn't match the CSV. In my initial example, my expected output is 4 rows, the same number of rows as in host.csv. Please suggest. I appreciate your help. Thank you.

My expected output - yellow and green circle:

ip_address   host    owner
10.1.1.1     host1
10.1.1.2     host2
10.1.1.3     host3   owner3
10.1.1.4     host4   owner4
Hi @yuanliu  Thanks for your help. It's been a while since the last time I saw you; I hope you're doing fine.

1) I ran the emulation and the result included 10.1.1.5 from index=owner (right join). My expected output is host.csv plus the owner data from index=owner only if the IP matches. (See below)

2) When I tested with real data (a 100k-row CSV), the result was reduced to 30k rows. I expected the left side as a baseline and the number of rows to remain the same.

Should I flip the logic? (inputlookup first, then append the index) I actually tried to flip the logic, but after the append, the number of rows still doesn't match the CSV (it's close).

I appreciate your help. Thank you.

Emulation result:

My expected output - yellow and green circle:

ip_address   host    owner
10.1.1.1     host1
10.1.1.2     host2
10.1.1.3     host3   owner3
10.1.1.4     host4   owner4
Requires: run_msearch capability Doc reference: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/SearchReference/Mpreview
Hello Splunkers,

For an unknown reason I'm not able to open a case, so I'm describing the issue here today.

I'm using the sim_forwarder_assets in my company to monitor the possible loss of UF & HF; it's working well and is easy to maintain. Today Splunk introduced an update (3.28.0): https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/ReleaseNotes/CloudMonitoringConsole

Since this update (around 2PM UTC), I've been seeing a lot of HF & UF marked as "missing", but in fact they are not, and they are working perfectly well. So I dug into the scheduled search responsible for building and maintaining the sim_forwarder_assets and found the issue.

Scheduled Search Name: SIM SS - Forwarder Build Asset Table

The macro at the top is the following: `sim_build_forwarder_assets(1m)`

index=_internal sourcetype=splunkd TERM(group=tcpin_connections) TERM("cooked") OR TERM("cookedSSL") (hostname!=*.splunk*.*) fwdType!="edge"
| lookup reserved-cidrs cidr_block AS sourceIp OUTPUT cidr_source
| where isnull(cidr_source)

So Splunk is searching for hosts that are not in the lookup reserved-cidrs. This lookup contains some well-known multicast addresses and some reserved subnets on the internet, but it also contains some GCP IPs shared with customers:

34.0.0.0/15 gcp Public IP addresses for Google APIs and services including Google Cloud netblocks.

However, in my case (and I believe for other customers too), some of my HFs are presented on Splunk Cloud with GCP public IPs (34.X & 35.X). As most of the GCP supernets are present in this lookup, my HFs are declared as "missing" because of that. I assume that Splunk should not include this kind of supernet in the lookup.

I do not know if everything is clear, and I'm sorry for my poor English :).
By default when you install SOAR, it should use a "SOAR user" which will then automatically be given a cronjob with "@reboot", thereby starting SOAR automatically when you restart the server.
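For reference, cron's special string for this is @reboot; the entry in the SOAR user's crontab looks roughly like this (the script path is illustrative and may differ by version):

```shell
# crontab -l, run as the SOAR user
@reboot /opt/phantom/bin/start_phantom.sh
```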
When we attempted to upgrade Splunk Universal Forwarder on our Windows servers, it repeatedly failed. Here is a Windows script that solved the issue. The issue was caused by registry keys that remained after uninstalling the previous version of Splunk (note: the original script had "reg delele" typos and PowerShell-style "HKLM:" paths, corrected to "reg delete" and "HKLM" below):

REM Set current directory
cd %~dp0

REM Uninstall any existing Splunk versions - if no versions are present, the script will continue
wmic product where "name like '%%universalforwarder%%'" call uninstall

REM Remove Splunk installation reg keys
reg delete HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products\771F943D524B4D44EB7F87D16BBECDE4 /f
reg delete HKEY_CLASSES_ROOT\Installer\Products\771F943D524B4D44EB7F87D16BBECDE4 /f
reg delete HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Features\771F943D524B4D44EB7F87D16BBECDE4 /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\UpgradeCodes\13631B46466632F4FA2E89CF8E9602DB" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RADAR\HeapLeakDetection\DiagnosedApplications\splunkd.exe" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RADAR\HeapLeakDetection\DiagnosedApplications\splunk-winevtlog.exe" /f
reg delete "HKEY_CLASSES_ROOT\Installer\Products\771F943D524B4D44EB7F87D16BBECDE4" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\SV100012\79ceb0e4-9f86-11ee-a216-000d3ac2f180" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\SV100018\38c9b010-d5c3-11ee-a218-000d3ac2f180" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\SV100019\affe40ec-d660-11ee-a218-000d3ac2f180" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\System\SV10001F\c19eef51-4dd7-11ef-a21c-000d3ac2f180" /f
reg delete "HKLM\SOFTWARE\Classes\Installer\Features\B0271F4D65C5D084FA81634DC56AD4AE" /f
reg delete "HKLM\SOFTWARE\Classes\Installer\UpgradeCode\13631B46466632F4FA2E89CF8E9602DB" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\Folders\" /v "C:\Program Files\SplunkUniversalForwarder\" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\splunk-perfmon.exe" /f
reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-80-972488765-139171986-783781252-3188962990-3730692313" /f
reg delete "HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{775313CB-929D-442C-8B52-2ED391D816E2}" /f

REM Install Splunk - set to passive to allow installer to see progress or errors
msiexec.exe /i splunkforwarder-9.2.2-d76edf6f0a15-x64-release.msi SPLUNKUSERNAME="SplunkUser" SPLUNKPASSWORD="password" DEPLOYMENT_SERVER="USW-SPLUNKDPL-1:8089" AGREETOLICENSE=yes /passive

REM Start Splunk Service
net start SplunkForwarder

REM Install SPL file
"c:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" install app splunkclouduf.spl -auth SplunkAdmin:password

REM Stop and start Splunk service to enable SPL
net stop SplunkForwarder
net start SplunkForwarder
exit
cd /opt/phantom/bin
sudo su phantom
./start_phantom.sh
You may want to increase maxQueueSize in outputs.conf. The log indicates blocking due to a low tcpout queue size, or the target is causing back-pressure.
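A hedged sketch of what that change could look like in outputs.conf on the forwarder (the stanza name, servers, and size are illustrative; by default maxQueueSize is auto):

```ini
# outputs.conf -- stanza name, servers, and size are illustrative
[tcpout:primary_indexers]
server = indexer1:9997, indexer2:9997
# A larger in-memory output queue helps absorb short back-pressure spikes
maxQueueSize = 128MB
```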
The log is still there, but instead of the huge number

Current dest host connection is using 18446603427033668018 bytes

the fix logs the correct number. See https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/695446/highlight/true#M115476