All Topics

Hello, how do I change the font size of the y-axis values in a Splunk dashboard bar chart? I tried:

<html>
  <style>
    #rk g[transform] text {
      font-size: 20px !important;
      font-weight: bold !important;
    }
    g.highcharts-axis.highcharts-xaxis text {
      font-size: 20px !important;
    }
    g.highcharts-axis.highcharts-yaxis text {
      font-size: 20px !important;
    }
  </style>
</html>
Our pro license has expired, and we wanted to check on the procedure for the upgraded license file.
I have two counter streams, and I would like to display the percentage B/(B+C) in the chart, but it always gives me an error.

B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero').publish(label='B')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero').publish(label='C')

How can I create a new metric out of these two to find either the cache hit or miss percentage?
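A sketch in SignalFlow of one way to derive the percentage: do the arithmetic on the streams before publishing, and publish the computed result as its own time series (metric names taken from the question; the label names are illustrative):

```
B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero')
# miss percentage = B / (B + C) * 100; hit percentage is the complement
miss_pct = (B / (B + C) * 100).publish(label='l2_cache_miss_pct')
hit_pct = (C / (B + C) * 100).publish(label='l2_cache_hit_pct')
```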
Hi all, I am attempting to use the lookup table "is_windows_system_file" with the following SPL, where Processes.process_name needs to match the filename from the lookup table. Once those results are obtained, I then want to find processes that are not running from C:\Windows\System32 or C:\Windows\SysWOW64.

| tstats `summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=* by Processes.aid Processes.dest Processes.process_name Processes.process _time
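One possible shape for this, as a sketch only (it assumes the lookup's match field is named filename and that Processes.process_path is populated in the Endpoint data model; adjust the field names to your lookup definition):

```
| tstats `summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=* by Processes.aid Processes.dest Processes.process_name Processes.process_path _time
| rename Processes.* as *
| lookup is_windows_system_file filename AS process_name OUTPUT filename AS matched
| where isnotnull(matched)
| where NOT match(process_path, "(?i)\\\\Windows\\\\(System32|SysWOW64)\\\\")
```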
Has anyone tried this integration? I am facing issues integrating it using this app: https://splunkbase.splunk.com/app/6535 (Azure DevOps (Git Activity) - Technical Add-On). The add-on only pulls the activity once from our TFS server and does not pull it continuously at the configured interval. No errors are observed in the internal logs.
Hey experts, I am encountering an issue with using filter tokens in a specific row on my dashboard. I have two filters named ABC and DEF, whose tokens are $abc$ and $def$. I want to pass these tokens only to one specific row and exclude them from the others. For the row where I need to pass the tokens, I've used the following syntax: <row depends="$abc$ $def$"></row>. For the rows where I don't want to use the tokens, I've used: <row rejects="$abc$ $def$"></row>. However, when I use the rejects condition, those rows are hidden. I want these rows to remain visible. Any help or example queries would be greatly appreciated. Thank you!
Hi all, @ITWhisperer @renjith_nair @woodcock

From the above "Textbox" input and panel for (_time, EventID, Server, Message, Severity), the "Textbox" settings are:

<input type="text" token="eventid" searchWhenChanged="true">
  <label>Search EventID</label>
</input>

When I search in the textbox using an EventID, it only displays results based on the EventID values. However, when I search using other parameters such as _time, Server, Message, or Severity, it does not retrieve any results. Can anyone assist me with creating a conditional search across any of the following fields in the above table: _time, EventID, Server, Message, or Severity? When I search for any value in these fields, I want the corresponding records to be displayed. The settings can be in the UI or in the source.
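One common pattern, as a sketch: give the input a wildcard default (e.g. <default>*</default> on the input above, so an empty textbox matches everything) and match the token against every column after the table is built. The base search here is a placeholder; the field names come from the question:

```
index=my_events
| table _time EventID Server Message Severity
| search _time="*$eventid$*" OR EventID="*$eventid$*" OR Server="*$eventid$*" OR Message="*$eventid$*" OR Severity="*$eventid$*"
```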
Hi Team, I want to calculate the peak hourly volume of each month for each service. Each service can have different peak times, so I first need to calculate the peak hour of each service for the month, and likewise for the last 3 months. Then I calculate the average of the 3 monthly peak hourly volumes. The table below is the sample requirement.

           January-24   February-24   March-24   Avg Volume
service1   20           50            20         30
service2   4            3             8          5
service3   20           30            40         30
service4   30000        30000         9000       23000
service5   200          300           400        300
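A sketch of one way to express this in SPL (the index and the service field name are assumptions; replace count with whatever measures your volume):

```
index=app_events earliest=-3mon@mon latest=@mon
| bin _time span=1h
| stats count AS hourly_volume BY service _time
| eval month=strftime(_time, "%B-%y")
| stats max(hourly_volume) AS peak_hourly BY service month
| stats avg(peak_hourly) AS avg_volume BY service
```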
I'm using Splunk Enterprise 9 with Universal Forwarder 9 on Windows. I'd like to monitor several structured log files but only ingest specific lines from them (each line begins with a well-defined string, so it is easy to write a matching regular expression or a simple match against it). I'm wondering where this can be achieved. Q: Can the UF do this natively, or do I need to monitor the file as a whole and then drop certain lines at the indexer?
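For reference, a universal forwarder does not normally parse event data, so regex filtering like this is typically applied at the indexer (or a heavy forwarder) with a nullQueue routing pair. A sketch, with the sourcetype and prefix as placeholders (the last matching transform wins per event, so the order in TRANSFORMS matters):

```
# props.conf (indexer or heavy forwarder)
[my:structured:log]
TRANSFORMS-keeponly = drop_all, keep_prefixed

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_prefixed]
REGEX = ^WANTED_PREFIX
DEST_KEY = queue
FORMAT = indexQueue
```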
Hello Splunk Community, I'm encountering an issue with the SA-cim_validator add-on where it's returning no results, and I'm hoping someone here can help me troubleshoot this further. Here's what I've done so far:

- Confirmed that the app has read access for all users and write access for admin roles.
- Checked that the configuration files are correctly set up.
- Splunk Common Information Model (Splunk_SA_CIM) is installed and up to date.
- Verified that the indexes and sourcetypes specified in the queries are present and contain data.
- Reviewed time ranges to include periods with log generation.
- Ensured that data models are accelerated as needed.
- Looked through Splunk's internal logs for any errors related to the SA-cim_validator but found nothing.

Despite these steps, every time I run a search query within the CIM Validator, such as index=fortigate sourcetype=fortigate_utm, it yields no results, regardless of the indexes, targeted data model, or search parameters I use. Does anyone have any insights or suggestions on what else I can check, or any known issues with the add-on? Any assistance would be greatly appreciated! Thank you, Alex_Mics
What is the version of Python in Splunk 9.2? We are currently on Splunk 9.1.0.2, and that version's Python (3.7.16) is already EOL. In our environment, we need to be on a supported version of Python.
Hello, why does changing addtime=false on a scheduled summary index (via Advanced Edit) have no effect? Thank you for your help. After the scheduled summary index ran at its scheduled time, the latest search under "view recent" still showed "addtime=t" instead of "addtime=f". addtime=false worked if I ran the search manually.
Any reason why this can't be visualized in a geo cluster map?

source="udp:514" index="syslog" NOT src_ip IN (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 17.0.0.0/8) action=DROP src_ip!="162.159.192.9"
| iplocation src_ip
| geostats count by country
I have a Splunk index configured in my OpenShift cluster as a ConfigMap. If I change the index in the ConfigMap, my container logs still go to the old index. Is there something I am missing?
Hello all - I'm trying to get Azure Event Hub data to flow into Splunk, but I'm having issues configuring it with the Add-on for Microsoft Cloud Services. I have configured an app in Azure that has the Reader & Event Hub Receiver roles, and the Event Hub has been configured to receive various audit information. When I try to configure the input, I receive this error message in splunk_ta_microsoft_cloudservices_mscs_azure_event_hub_XYZ.log:

2024-03-08 16:20:31,313 level=ERROR pid=22008 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=mscs_azure_event_hub.py:run:939 | datainput="PFG-AzureEventHub1" start_time=1709914805 | message="Error occurred while connecting to eventhub: CBS Token authentication failed. Status code: None Error: client-error CBS Token authentication failed. Status code: None"

I then tried to enter the connection string-primary key in the FQDN field, but received the error below. This occurs because the add-on tries to create a .ckpt file, but the file path is too long and contains invalid characters.

2024-03-08 14:41:32,112 level=ERROR pid=34216 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=utils.py:wrapper:72 | datainput="PFG-AzureEventHub1" start_time=1709908886 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\splunksdc\utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 933, in run
    consumer = self._create_event_hub_consumer(workspace, config, credential, proxy)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 851, in _create_event_hub_consumer
    args.consumer_group,
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 238, in open
    checkpoint = SharedLocalCheckpoint(fullname)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 103, in __init__
    self._fd = os.open(fullname, os.O_RDWR | os.O_CREAT)
FileNotFoundError: [Errno 2] No such file or directory: 'L:\\Program Files\\Splunk\\var\\lib\\splunk\\modinputs\\mscs_azure_event_hub\\Endpoint=sb://REDACTED.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=REDACTED-insights-activity-logs-$Default.v1.ckpt'

Here is my inputs.conf for the add-on:

[mscs_azure_event_hub://PFG-AzureEventHub1]
account = AzureActivity
consumer_group = $Default
event_hub_name = insights-activity-logs
event_hub_namespace = REDACTED.servicebus.windows.net
index = azure-activity
interval = 300
max_batch_size = 300
max_wait_time = 10
sourcetype = mscs:azure:eventhub
use_amqp_over_websocket = 1

I have been stuck on this for the past couple of days. Any advice would be greatly appreciated!
Recently our TA was rejected for Splunk Cloud compatibility due to a configuration option that allows our customers to disable SSL verification so that they can make REST API calls to a server that has a self-signed TLS certificate. The TA uses Python code for the inputs, and one of the configuration options when setting up the input was to enable or disable SSL verification. Customers using servers with self-signed certificates could opt to disable verification, which would set the verify parameter of helper.send_http_request to False. This option passed Cloud compatibility until recently, when we were notified that external network calls must be made securely, so our TA no longer qualifies for Cloud compatibility with the option to set verify=False. Has anyone else run into this issue, and is there a solution other than forcing customers to purchase TLS certificates from a trusted CA? I did see there is an option on the helper.send_http_request call to specify the CA bundle, but we have no control over which CA is used to generate the self-signed certificate, so there is no way to include a bundle in the TA. Any suggestions are welcome.
Hello, how can I modify _time when running a summary index on a scheduled search? Please advise; I appreciate your help. Thank you. When running a summary index on a scheduled search, by default _time is set to info_min_time (the start time of the search duration) instead of search_now (the time when the search ran). So if, at the current time, I collect the summary index over the last 30 days, _time will be set across the last 30 days instead of the current time. The problem is that if I then run a search over the past 24 hours, the data won't show up, because _time is dated within the last 30 days, so I have to search over the past 30 days.
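One sketch of a workaround, assuming an explicit collect in the search is acceptable: stamp the rows with the run time yourself before they are written to the summary index (the index names here are placeholders):

```
index=my_source earliest=-30d@d latest=now
| stats count AS events BY host
| eval _time=now()
| collect index=my_summary addtime=false
```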
IHAC that is trying to ingest logs from their self-hosted Trellix instance. When I try to add an account, the URL field only lists: Global, Frankfurt, India, Singapore, Sydney. There is no other input field to specify an actual FQDN/IP. Am I missing something, or is this feature not present?
Hi Splunk Team, could you guide me through the process of consolidating ThousandEyes into Splunk so that alerts are centralized on a dashboard? Please share each step of the process. Thanks
Hi Team, I'm currently using Version 8.2.10 and encountered an issue today. It seems that my admin account has disappeared from USERS AND AUTHENTICATION -> Users. I'm perplexed by this occurrence and would appreciate any insights into why this might have happened. Additionally, I'm seeking guidance on how to prevent similar incidents in the future.