All Topics

Hello Splunkers! Has anyone else experienced slow performance with Splunk Enterprise Security? For me, when I open Content Management under Configure and, say, filter to see enabled correlation searches, it can take up to 5 minutes to load just 5 or 6 correlation searches. However, if I run a search in Search & Reporting (within Enterprise Security), it completes quickly, returning hundreds of thousands of events. Other cases where I see huge lags: creating a new investigation, updating the status of a notable, deleting an investigation, opening Incident Review settings, and adding a new note to an investigation. If anyone has had a similar experience, could you please share how to improve the performance of the Enterprise Security app? Some notes to give more info about my case:
- The health circle is green.
- The deployment is all-in-one (Splunk Enterprise, ES, and all the apps and add-ons), running on an Ubuntu Server 20.04 virtual machine with 42 GB RAM, a 200 GB hard disk (thin provisioned), and 32 vCPUs.
- My Splunk deployment receives logs from around 4-5 sources, with an average load of 500-700 MB/day.
Thanks for taking the time to read and reply to my post.
Hello! Dark mode still does not work in Splunk Enterprise 9.2.1 when an emoji is in one of the visualizations, like a single value, for example. Here is a run-anywhere dashboard. Just set it to dark mode and it stops working. Remove the pizza and it works again. If you are already in dark mode and add the emoji, it will work after the initial save, but after refreshing it reverts to light. If you don't like pizza, add an emoji of your choice.

<dashboard version="1.1" theme="light">
  <label>pizza dark test</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval emoji="ciao " | table emoji</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">0</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>

Thanks! Andrew
https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues
https://docs.splunk.com/Documentation/Splunk/9.1.4/ReleaseNotes/Fixedissues

One customer reported a very interesting issue with graceful Splunk restarts: events go missing during a graceful restart/rolling restart (even when splunk stop finishes gracefully). useACK=true is an option, but ideally it should only be needed when splunk stop times out. This has been an issue for many years, and it matters most where config changes are pushed frequently, triggering frequent indexer/HF/IF restarts. The issue is fixed in 9.1.4/9.2.1:

TcpInputProcessor not able to drain splunktcpin queue during graceful shutdown

How can you detect whether this applies to your deployment? Check splunkd.log for:

WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown

Also check metrics.log; see https://community.splunk.com/t5/Knowledge-Management/During-indexer-restart-indexer-cluster-rolling-restart/m-p/683763#M9962
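The detection step described above can be sketched as a simple log scan. This is an illustration in Python, not a Splunk feature; the log path and the sample lines are assumptions, but the WARN message itself is the one quoted in the post.

```python
# Scan splunkd.log lines for the shutdown warning that indicates the
# TcpInputProcessor failed to drain splunktcpin during graceful shutdown.
needle = "Could not process data received from network. Aborting due to shutdown"

# Stand-in log lines; in practice, read /opt/splunk/var/log/splunk/splunkd.log.
log_lines = [
    "05-22-2024 10:00:01 INFO  TcpInputProc - routine message",
    "05-22-2024 10:00:02 WARN  TcpInputProc - Could not process data received from network. Aborting due to shutdown",
]

affected = any(needle in line for line in log_lines)
```

If `affected` is true for a restart window, that restart likely dropped inbound events and the 9.1.4/9.2.1 fix is relevant to you.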
I tried to configure an SSL/TLS connection between a forwarder and an indexer.

On the forwarder, /opt/splunkforwarder/etc/system/local/outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = my.domain.com:9998
clientCert = /opt/splunk/etc/auth/mycerts/client.pem
useClientSSLCompression = true

[tcpout-server://my.domain.com:9998]

The certificate was created by Certbot and prepared according to the instructions. It works well for Splunk Web, and I believe it works here too.

On the indexer, /opt/splunk/etc/system/local/inputs.conf:

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/test_full.pem

test_full.pem is the prepared certificate from Certbot. If I use the forwarder without certificates, everything works fine, so there are no connection errors. Output of splunk list forward-server:

Configured but inactive forwards:
my.domain.com:9998

From /var/log/splunk/splunkd.log I can see the following errors:

05-22-2024 11:51:03.823 +0000 ERROR TcpOutputFd [29087 TcpOutEloop] - Read error. Connection reset by peer
05-22-2024 11:51:03.823 +0000 WARN AutoLoadBalancedConnectionStrategy [29087 TcpOutEloop] - Applying quarantine to ip=99.99.99.99 port=9998 connid=2 _numberOfFailures=2

Could you please help me debug the problem?
My search is below. The two <my search command for list user rating list> search commands are identical; how can I reduce the search so that <my search command for list user rating list> runs only once and its results are shared by both joins? The transaction's sellerId and buyerId should each be looked up against the user rating list to get the rating data.

<my search command for transaction records>
| dedup orderId
| table orderId, sellerId, buyerId
| join type=left sellerId
    [ search <my search command for list user rating list>
    | table sellerId, sellerRating]
| search orderId!=""
| table orderId, sellerId, buyerId, sellerRating
| join type=left buyerId
    [ search <my search command for list user rating list>
    | table buyerId, buyerRating]
| search orderId!=""
| table orderId, sellerId, buyerId, sellerRating, buyerRating

Transaction records might look like this:

orderId  sellerId  buyerId
123      John      Marry
456      Alex      Josh

User ratings (all users):

user   rating
Josh   10
Alex   -2
Lisa   1
Marry  3
John   0
Tim    0

Expected result:

orderId  sellerId  buyerId  sellerRating  buyerRating
123      John      Marry    0             3
456      Alex      Josh     -2            10
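The "run the rating search once" idea can be sketched outside SPL. This is a minimal Python illustration of the logic, not Splunk code: build one user-to-rating map, then enrich each transaction for both join keys from that single map (in SPL this usually maps to a single `lookup` applied twice, rather than two `join` subsearches).

```python
# One shared rating table, built once, used for both sellerId and buyerId.
ratings = {"Josh": 10, "Alex": -2, "Lisa": 1, "Marry": 3, "John": 0, "Tim": 0}

orders = [
    {"orderId": "123", "sellerId": "John", "buyerId": "Marry"},
    {"orderId": "456", "sellerId": "Alex", "buyerId": "Josh"},
]

def enrich(order, ratings):
    # Left-join semantics: missing users get None rather than dropping the row.
    return {
        **order,
        "sellerRating": ratings.get(order["sellerId"]),
        "buyerRating": ratings.get(order["buyerId"]),
    }

result = [enrich(o, ratings) for o in orders]
```

The point of the sketch: the rating data is computed once and reused for both keys, which is exactly what replacing the duplicated subsearch achieves.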
Hi, I tried to add a piece of code to change the color of values based on a certain condition, but the change is not reflected in my dashboard. Can you please check and advise what is going wrong?

New code added -

<single id="CurrentUtilisation">
  <search>
    <query>
      <![CDATA[
index=usage_index_summary
| fields Index as sourceIndex, totalRawSizeGB
| where Index="$single_index_name$"
| stats latest(totalRawSizeGB) as CurrentSize by Index
| join left=L right=R where L.Index=R.extracted_Index
    [ search index=index_configured_limits_summary
    | stats latest(maxGlobalDataSizeGB) as MaxSizeGB by extracted_Index ]
| rename L.CurrentSize as CurrentSizeGB, R.MaxSizeGB as MaxSizeGB, L.Index as Index
| eval unit_label = if(CurrentSizeGB < 1, "MB", "GB")
| eval CurrentSizeGB = if(CurrentSizeGB < 1, CurrentSizeGB*1024, CurrentSizeGB)
| eval CurrentSizeDisplay = round(CurrentSizeGB) . if(unit_label == "MB", "MB", "GB")
| eval CurrentSizeDisplay = if(CurrentSizeGB == 0, "None", CurrentSizeDisplay)
| eval range=if(CurrentSizeGB > MaxSizeGB, "over", "under")
| table CurrentSizeDisplay, range
      ]]>
    </query>
  </search>
  <option name="colorBy">value</option>
  <option name="drilldown">none</option>
  <option name="rangeColors">["red", "white"]</option>
  <option name="refresh.display">progressbar</option>
  <option name="trellis.enabled">0</option>
  <option name="underLabel">Current Utilisation</option>
  <option name="useColors">1</option>
</single>

What I want: if CurrentSize > MaxSize, the value should be displayed in red, otherwise white. When run independently, the query shows correct results for range and the current/max size values, but the color does not change in the dashboard. I have looked this up in the community and tried the same logic mentioned in this successful solution, but to no avail.
Reference used - https://community.splunk.com/t5/Dashboards-Visualizations/How-can-I-change-Splunk-Dashboard-single-value-field-color-of/td-p/596833
After upgrading Azure blob storage archiving to 1.1.1 we get ERROR BucketMover:

10:16:29.231 +0000 ERROR BucketMover [15786 FilesystemOpExecutorWorker-1] - coldToFrozenScript cmd='"/usr/bin/python3" "/opt/splunk/etc/apps/TA-azure-blob-archiving/bin/AzFrozen2Blob.py" /mnt/data1/splunkdata/network/db/db_1708118969_1708333374_3431' exited with non-zero status='PID 15806 exited with code 1'
Hello, I am new to Splunk. I don't understand: if data is not indexed, can I still search it in Splunk, or can I only search indexed data?
Hello, I have an alert set up which reads a lookup file (populated by another report); if there are any records in the lookup file, an email should be triggered for each record. I understand this can be done using the "For each result" trigger, but I also want to use some field values from each record in the email subject. Example: in this case, I want 6 emails to be triggered with subject lines such as:

Email 1: Selfheal Alert - Cust A - Tomcat Stopped - Device A1- May-24 - Device Level
Email 2: Selfheal Alert - Cust A - Tomcat Stopped - Device A2- May-24 - Device Level
Email 3: Selfheal Alert - Cust B - Failed Job - Device B1- May-24 - Device Level
Email 4: Selfheal Alert - Cust C - Tomcat Stopped - Device C1- May-24 - Device Level
Email 5: Selfheal Alert - Cust C - Failed Job- Device C2- May-24 - Device Level
Email 6: Selfheal Alert - Cust C - Failed Job - Device C3- May-24 - Device Level

How can I achieve this? Thank you.
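The per-result subject line above can be sketched as a simple substitution. In Splunk, alert email actions can typically reference per-result fields with $result.fieldname$ tokens in the subject; this Python snippet only simulates that substitution, and the field names (customer, issue, device, month) are assumptions about the lookup's columns.

```python
# Simulate "for each result" subject construction from alert result rows.
results = [
    {"customer": "Cust A", "issue": "Tomcat Stopped", "device": "Device A1", "month": "May-24"},
    {"customer": "Cust C", "issue": "Failed Job", "device": "Device C3", "month": "May-24"},
]

# Analogous to a subject template using $result.customer$ etc. in the alert action.
template = "Selfheal Alert - {customer} - {issue} - {device}- {month} - Device Level"
subjects = [template.format(**r) for r in results]
```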
Dear all, I need help integrating OpenShift with our Splunk Enterprise deployment. I have integrated OpenShift with Splunk using HEC, and the connection paired successfully: when a test message was sent from OpenShift, we received it in Splunk. However, we don't receive logs continuously; we only see the test logs, and after that no logs flow to Splunk. Can someone please guide me here?
Please tell me about lookup operation.

1. When you register a new lookup table file (CSV) from the GUI, you can immediately reference it on the search screen:

| inputlookup "lookup.csv"

However, it does not appear in the list of files in the "Lookup file" pull-down on the subsequent Create New Lookup Definition screen; each time, it only appears after more than a day, which makes setup slow. Is this a limitation of the product? If you know the cause, please let me know.

2. Lookup does not work.

The following CSV file is registered, and a lookup definition and automatic lookup are also set up.

lookup.csv:

PC_Name   Status  MacAddr1     MacAddr2
PC_Name1  Used    aa:bb:cc...  zz:yy:xx...
PC_Name2  Used    aa:bb:cc...  zz:yy:xx...
PC_Name3  Used    aa:bb:cc...  zz:yy:xx...

MacAddr1 and MacAddr2 are the Ethernet and WiFi addresses; I want to use MacAddr2 as the key. The logs in the target index contain a field CL_MacAddr, defined as a calculated field. I would like to look up this CL_MacAddr MAC address in lookup.csv and output PC_Name and Status as fields, but it is not working. For example, when I enter the following on the search screen, only the existing fields appear, not PC_Name, Status, etc.:

index="nc-wlx402" sourcetype="NC-WIFI-3" | lookup "lookup.csv" MACAddr2 AS CL_MacAddr OUTPUTNEW

However, another lookup definition works for the same index and sourcetype (automatic lookup, confirmed working). I assume I'm missing something basic... please help me.
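One frequent cause of a MAC-keyed lookup matching nothing is that the key values differ in case or separator between the CSV and the events ("ZZ:YY" vs "zz-yy"). The sketch below is a Python illustration of that idea, not Splunk code: normalize both sides before matching, which mirrors what a case-insensitive lookup definition achieves. The MAC values are made-up examples.

```python
# Normalize MAC addresses so "ZZ:YY:XX:01:02:03" and "zz-yy-xx-01-02-03" match.
def norm_mac(mac):
    return mac.lower().replace("-", ":")

# Lookup table keyed on the normalized MacAddr2 value.
lookup = {norm_mac("ZZ:YY:XX:01:02:03"): {"PC_Name": "PC_Name1", "Status": "Used"}}

# Event-side value (CL_MacAddr) in a different case/separator style.
event_mac = "zz-yy-xx-01-02-03"
match = lookup.get(norm_mac(event_mac))
```

If normalizing both sides makes the match succeed here, the Splunk-side fix is usually to normalize CL_MacAddr in the calculated field or make the lookup definition case-insensitive.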
index="xyz" sourcetype="abc"
| search Country="ggg" statusCode=200
| stats count as Registration
| where Registration=0

Could you please help me modify this query? The time period is the last 24 hours.
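The intent of the query above appears to be "alert when zero matching registrations occurred in the last 24 hours". A Python sketch of that logic (not SPL; the event shape is an assumption):

```python
# Count events matching the filter and flag when the count is zero.
# Note: in SPL, `stats count` with no split-by field still emits one row
# with count=0 when no events match, so `where Registration=0` can work.
events = []  # e.g. the last 24 hours of matching events; empty here

registration = sum(
    1 for e in events
    if e.get("Country") == "ggg" and e.get("statusCode") == 200
)
should_alert = registration == 0
```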
Hi team, I have an auto-extracted field, auth.policies{}, and another field called user. Whenever auth.policies{} is root, I need that to become part of the user field. May I know how to do it? Is it possible to use case and coalesce together?
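The conditional-with-fallback logic being asked about can be sketched as follows. This is a Python illustration of the idea only; the function name and the "root wins, otherwise keep the existing user" rule are assumptions about the desired behavior (in SPL it would be something along the lines of combining an `if`/`case` on 'auth.policies{}' with `coalesce` onto user).

```python
# If the multivalue policies field contains "root", use it as the user;
# otherwise fall back to the existing user value (coalesce-style).
def resolve_user(policies, user):
    return "root" if "root" in policies else user

u1 = resolve_user(["root", "default"], "alice")
u2 = resolve_user(["default"], "bob")
```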
We currently have a Splunk Enterprise cluster that uses SmartStore in AWS S3. We're looking to move the cluster to an entirely new AWS account, but we are not sure of the best way to move the contents of the SmartStore without corrupting any of the indexed files. What would be the best way to migrate from one SmartStore backend to another without losing any data?
Hi all, we have around 8 dashboards fetching data from the same index. There are around 150 hosts in this index, but we don't want to see data from a particular set of 50 hosts in a dashboard. How can this be done? Any input would be appreciated.
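The exclusion idea can be sketched simply: keep the unwanted hosts in one list (in Splunk, commonly a lookup file shared by all the dashboards) and filter them out of each panel's base search. The Python below only illustrates the filtering logic; the host names are stand-ins.

```python
# One shared exclusion set, reused by every dashboard's base search.
excluded_hosts = {"host01", "host02"}  # stand-ins for the real 50 hosts

events = [{"host": "host01"}, {"host": "host77"}, {"host": "host02"}]

# Keep only events whose host is not in the exclusion set.
visible = [e for e in events if e["host"] not in excluded_hosts]
```

Centralizing the list means a host added to the exclusion set disappears from all 8 dashboards at once, instead of editing 8 searches.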
Logs are in JSON format and we want to use the attribute.app.servicecode field's values as a drop-down in a classic dashboard.

Query: index=application-idx | stats count by attribute.app.servicecode

How can we get this field's values into a drop-down?
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Usepersistentqueues

Persistent queuing is available for certain types of inputs, but not all. A major limitation of persistent queues at inputs (enabled on certain UF/HF/IHF/IUF inputs): if the downstream parsingqueue/indexqueue/tcpoutqueue are blocked/saturated and a DS bundle push triggers a splunk restart, events are dropped because the UF/HF/IHF/IUF failed to drain its queues. On a Windows DC, persistent queuing is enabled for Windows modular inputs, yet a DS bundle push triggers a DC restart and the Windows modular input events still in the parsingqueue/tcpoutqueue are dropped. On a Windows DC, some Windows event logs (events that occurred while the workstation was being shut down) are always lost. When laptops are off the network and restarted/shut down, in-memory queue events are dropped. Even with PQ at inputs, during a splunk restart on the forwarding tier, in-memory queued events may still be dropped.

Typical steps for a laptop where events are always lost:
1. Splunk is installed on a Windows laptop.
2. The laptop is put to sleep.
3. The Splunk service stops.
4. One or two Windows events are generated, such as 4634-Session_Destroyed.
5. Later the laptop wakes up, and one or two events are generated, such as 4624-Login.
6. The Splunk service starts.
7. The events created when sleep started and when sleep ended are never ingested.
I want to do some analysis on "status" below but am having a hard time getting to "status". I start with:

| spath path=log.content
| table log.content

but that only gives me the JSON array from content. I've tried "spath path=log.content{}" and "spath path=log.content{}.status" but both end up empty. I want to be able to do a ternary operation on "status" like the sample below:

| mvexpand log.content{}.status
| eval Service=if('log.content{}.status'="CANCELLED", "Cancelled", if('log.content{}.status'="BAY", "Bay", null()))
| where isnotnull(Service)
| stats count by Service
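The extraction being attempted can be sketched in plain Python to make the target clear: parse the event's JSON, walk the log.content array, and classify each element's status. The payload shape here is an assumption reconstructed from the spath paths in the post.

```python
import json

# Assumed event payload: log.content is an array of objects with "status".
raw = '{"log": {"content": [{"status": "CANCELLED"}, {"status": "BAY"}, {"status": "OTHER"}]}}'

# Classification table, mirroring the nested if() in the SPL sample.
labels = {"CANCELLED": "Cancelled", "BAY": "Bay"}

statuses = [c.get("status") for c in json.loads(raw)["log"]["content"]]

# Keep only recognized statuses (the isnotnull(Service) filter).
services = [labels[s] for s in statuses if s in labels]
```

In other words: expand the array first, then classify each element; the elements with no mapping are dropped rather than kept as nulls.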
Hi everyone, I need help on how to integrate SolarWinds with Splunk Cloud or Splunk Enterprise. As far as I can see, the add-on is not supported by Splunk Support. Please suggest the best possible approaches!
Hi, I want to display time on my dashboard but all I see is two fields with data; any help with the search to populate the rest of the fields would be appreciated. I have attached my dashboard. My search looks like this:

index=a sourcetype=b earliest=-1d
    [| inputlookup M003_siem_ass_list where FMA_id=*OS -001*
    | stats values(ass) as search
    | eval search=mvjoin(search,", OR ")]
| fields ip FMA_id _time d_role
| stats latest(_time) as _time values(*) by ip
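The subsearch above builds an OR-joined clause out of a list of asset values. The Python below only illustrates that string-building step (the asset values are made-up stand-ins); note that joining with " OR " produces a valid boolean clause, whereas the ", OR " separator in the posted search would inject stray commas.

```python
# Turn a list of asset values into a single OR-joined search clause,
# as the subsearch's mvjoin is trying to do.
assets = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
clause = " OR ".join(assets)
```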