All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunk Experts, I have a problem that confuses me. The file being ingested generates another file which has a different filename format but contains the same data. (I don't want to use the dedup function, or a scheduled query piping to delete, because neither resolves the root cause.) Please see examples of the generated data below:

/opt/splunk/CompanyX/DAILY/JUL-20_GPW_DAILY_2020_AS_OF_07212020.txt (original)
/opt/splunk/CompanyX/DAILY/.JUL-20_GPW_DAILY_2020_AS_OF_07212020.txt.tokUbm (generated)
/opt/splunk/CompanyX/DAILY/JUL-19_GPW_DAILY_2020_AS_OF_07202020.txt (original)
/opt/splunk/CompanyX/DAILY/.JUL-19_GPW_DAILY_2020_AS_OF_07202020.txt.MjSIIF (generated)
/opt/splunk/CompanyX/DAILY/JUL-18_GPW_DAILY_2020_AS_OF_07192020.txt (original)
/opt/splunk/CompanyX/DAILY/.JUL-18_GPW_DAILY_2020_AS_OF_07192020.txt.nO9Y5C (generated)

The extraction happens at midnight and goes to a directory that a script replicates onto the Splunk indexer instance. My configuration in inputs.conf:

[monitor:///opt/splunk/CompanyX/DAILY/*]
disabled = false
index = gpw_daily
sourcetype = gpw_csv
crcSalt = <SOURCE>

This configuration worked properly for the past year; the incident only started this past week. If anyone has encountered this problem, please help me resolve it. Thanks.
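The hidden dot-files with random suffixes look like temporary copies left behind by the replication step (rsync, for example, writes ".name.XXXXXX" temp files while transferring), and with crcSalt = <SOURCE> every new path is treated as a brand-new file, so the copy gets indexed alongside the original. If that is what is happening, a blacklist on the monitor stanza should keep the temp files out. A minimal sketch, assuming the generated files always begin with a dot:

[monitor:///opt/splunk/CompanyX/DAILY/*]
disabled = false
index = gpw_daily
sourcetype = gpw_csv
crcSalt = <SOURCE>
# ignore hidden temp copies such as .JUL-20_GPW_DAILY_2020_AS_OF_07212020.txt.tokUbm
blacklist = /\.[^/]+$

blacklist is matched as a regex against the full path, so this pattern excludes any file whose name starts with a dot while leaving the originals untouched.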
I am having an issue getting our 'rest_ta' add-on to run properly on a schedule. When I enter the data and save it, the call runs properly and data is added to the specified index. But it does not run after that on the scheduled polling interval, whether that is a number of seconds or a cron schedule. I also noticed when reloading that some of the values from the UI are not saved properly (for example, Polling Interval and Application Key). I don't think this issue is with the add-on itself, because if I use just the add-on on a clean install everything works as expected. My guess is that another add-on is failing somewhere and blocking this from saving properly. When I click 'Save' the following shows up in the splunkd log file. I don't see an error, but I was thinking that if I could determine what order things are running in here, I could look for the next app that should have run and try to remove/disable it to see if that fixes things.

07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.157 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/TA-netapp_eseries/bin/rest.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/app-elections/bin/ftr_lookups.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: /opt/splunk/bin/splunkd instrument-resource-usage
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: 0 ms
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/netapp_app_eseries_perf/bin/eseries_folder_hierarchy_gen.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: 60000 ms
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/search_activity/bin/CheckDataStats-events.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: 600000 ms
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/search_activity/bin/CheckDataStats-search.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: 600000 ms
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval="0 0 * * *" is a valid cron schedule
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_app_for_nix/bin/update_hosts.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - cron schedule: "0 0 * * *"
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_app_stream/bin/scripted_inputs/deploy_splunk_ta_stream.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_app_stream/bin/scripted_inputs/setup_independent_stream.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval="0 * * * *" is a valid cron schedule
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - cron schedule: "0 * * * *"
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_instrumentation/bin/on_splunk_start.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval="0 0 * * *" is a valid cron schedule
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_instrumentation/bin/schedule_delete.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - cron schedule: "0 0 * * *"
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - New scheduled exec process: python /opt/splunk/etc/apps/splunk_monitoring_console/bin/dmc_config.py
07-17-2020 14:06:07.158 -0400 INFO ExecProcessor - interval: run once
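Since the same rest.py input appears scheduled roughly a dozen times as "run once", it may be worth confirming which app each copy of the configuration comes from before hunting for a conflicting add-on. One way to check the layering, assuming shell access to the instance, is btool; a sketch:

# show every inputs.conf setting along with the conf file that supplies it
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i rest.py

The --debug flag prefixes each line with the conf file path, which makes it visible whether the polling interval is being overridden by a stale copy in another app's local directory.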
Hello, I have a search running that shows the custom "Sign-on_Time" field in a table, and I want to format it into a more readable format. Here is my search:

index="foo" host="bar" sourcetype="foobar" OR sourcetype="barfoo"
| rex "Session_ID\": \"(?<Session_ID>\w+)\""
| stats values(System_Account) as System_Account values(Authentication_Type) as Authentication_Type values(Sign-on_Time) as Sign-on_Time values(Is_Admin) as Is_Admin count(eval(like(Authentication_Type,"Proxy Started"))) as SA_count values(Task) as Task by Session_ID
| where SA_count > 0
| where Is_Admin = 1
| table System_Account Authentication_Type Sign-on_Time Session_ID Is_Admin Task

The time comes out like this: [screenshot not included] Is there a way for me to format it like (HH MM SS, MM-DD-YY)? For my Sign-on_Time field, I tried this:

eval signOnTime=strftime(Sign-on_Time,"%a %B %d %Y %H:%M:%S")

and then I tried outputting that in my table, but it doesn't show up. What am I doing wrong?
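Two details commonly cause exactly this symptom: strftime() expects an epoch time, so a string timestamp has to go through strptime() first, and a field name containing a hyphen has to be wrapped in single quotes inside eval, otherwise Sign-on_Time is parsed as the arithmetic expression Sign minus on_Time. A minimal sketch, appended after the stats command; the strptime format string is an assumption and must be adjusted to match the raw Sign-on_Time values:

... | eval signOnTime=strftime(strptime('Sign-on_Time',"%Y-%m-%dT%H:%M:%S"),"%H %M %S, %m-%d-%y")
| table System_Account Authentication_Type signOnTime Session_ID Is_Admin Task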
Hello, what do I need to do to fix these issues? [screenshot not included] I have an external 2TB drive. Should I move it from C: to G:? What do I need to do? When I upload a data file, it just freezes. Patrick
Hello, for an on-prem Controller, the email notification contains the link to the Events, but when I click on the link I get an error, i.e. "(controller_name.domain) didn't send any data / ERR_EMPTY_RESPONSE". This is because the link in the notification points to an http URL, while our controller is SSL-enabled and its URL uses https. Where can I fix the link to the controller so that links to controller events use an https connection? Thanks, Emrul Yousof
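On an on-premises controller, the base URL used in notification deep links is generally a controller-level admin setting rather than anything in the email template itself. If memory serves there is an administration property along the lines of the sketch below, but the exact property name is an assumption and should be confirmed against the AppDynamics documentation for your controller version:

appdynamics.controller.ui.deeplink.url=https://controller_name.domain:8181/controller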
Hello, I am using the stats command, but AVG shows as blank even though min and max work fine:

index=index_test source="Test" host="Testhost"
| stats AVG(timetaken) as AVG

Any help would be greatly appreciated. Thanks.
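A blank avg() next to working min() and max() usually means the field is not numeric in every event: min and max can fall back to string comparison, while avg only averages values it can treat as numbers. A minimal sketch that coerces the field first; the replace() pattern is an assumption about what the stray characters might look like:

index=index_test source="Test" host="Testhost"
| eval timetaken=tonumber(replace(timetaken, "[^0-9\.]", ""))
| stats avg(timetaken) as AVG

If AVG is still blank after this, running | stats count by timetaken over a small window will show what the raw values actually contain.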
There has been a huge spike in the number of uploads, resulting in many more failed uploads from throttling than we had before. It is currently unclear to me what caused this: whether constant retries underlie the huge spike, or whether some new data being uploaded caused it. The bucket size has remained fairly constant, but the number of daily uploads has gone from about 80k to 4 million. Looking at some of the S3 access logs, it seems like search objects are getting uploaded? Most of these uploads are for "ra" (report acceleration) buckets.

index=_internal host=<XXX> sourcetype=splunkd action=upload status=succeeded NOT cacheId=ra*
| rex field=cacheId "bid\|(?<indexname>\w+)\~\w+\~"
| timechart span=1m partial=f limit=50 per_second(kb) as kbps by indexname
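Since the search above explicitly excludes the report acceleration cache with NOT cacheId=ra*, one way to confirm whether "ra" objects account for the spike is to split the same upload events by cache type instead; a sketch:

index=_internal host=<XXX> sourcetype=splunkd action=upload status=succeeded
| eval cacheType=if(like(cacheId,"ra%"),"report_acceleration","index_bucket")
| timechart span=1m partial=f count by cacheType

If the report_acceleration series carries the jump from ~80k to 4 million daily uploads, a recently added or constantly rebuilding accelerated report would be the next thing to look for.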
We have monitors on 2 Windows file paths:

[monitor://C:\Data\Data\Disk\SplunkLoad\IsilonCaptures\i*.txt]
index = storage_test
sourcetype = storage:data

[monitor://C:\Data\Data\Disk\SplunkLoad\UnityCaptures\Unity*.csv]
index = storage_test
sourcetype = storage:unity

Filenames look like:

i2-20200206.txt
i4-site2-20200129.txt
Unity450-DW-LUNs.csv
Unity450-Open-Pools-Site2.csv

The first time after adding these to the app, pushing from the deployment server, and having the UF restart, it imported MOST of the files, except for a few small, one-line files. So I deleted all of the data in the test index, added crcSalt = <SOURCE>, and repushed. Got the same results. I deleted the data, changed the crcSalt to something different, and repushed: pretty much the same results, some but not all files sent for indexing. Now I cannot get it to pull in the files at all. Any thoughts on what might be going on?
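With very small files, the usual suspect is the initial CRC check: by default the forwarder hashes only the first 256 bytes of a file, so one-line files that share a common header can collide and be skipped as already seen. crcSalt = <SOURCE> only mixes the path into that hash, so re-pushing the same paths does not change anything, and note that crcSalt values other than the literal <SOURCE> are just static strings. A sketch of an alternative, assuming the small files differ somewhere within their first kilobyte:

[monitor://C:\Data\Data\Disk\SplunkLoad\IsilonCaptures\i*.txt]
index = storage_test
sourcetype = storage:data
initCrcLength = 1024

The TailingProcessor and WatchedFile entries in the UF's splunkd.log should also show which files are being skipped and why.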
Hi, I'm trying to get data from an object containing an array, and my search returns some of the results, but I can't see why I don't get them all. The data looks like this:

{
  "severity": "INFO",
  "name": "C758JH9",
  "items": [
    { "Name": "C758JH9", "Operating System": "Microsoft Windows 10 Enterprise", "ArticleID": "2920724", "ResourceId": "16783579", "LastStatusCheckTime": "20200713175056.983000+000", "DateCreated": "20170710214528.000000+000", "LocalizedDisplayName": "Update for Microsoft Office 2016 (KB2920724) 32-Bit Edition", "LastStatusCheckTime1": "20200713175056.983000+000", "LastLogonUserName": "saurpal", "LastLogonTimestamp": "20200703164437.000000+***", "Status CHnage": "20200713175056.983000+000", "Superseeded": "False", "Status": "INSTALLED" },
    { "Name": "C758JH9", "Operating System": "Microsoft Windows 10 Enterprise", "ArticleID": "2920712", "ResourceId": "16783579", "LastStatusCheckTime": "20200713175057.787000+000", "DateCreated": "20170710214536.000000+000", "LocalizedDisplayName": "Update for Microsoft Office 2016 (KB2920712) 32-Bit Edition", "LastStatusCheckTime1": "20200713175057.787000+000", "LastLogonUserName": "saurpal", "LastLogonTimestamp": "20200703164437.000000+***", "Status CHnage": "20200713175057.787000+000", "Superseeded": "False", "Status": "INSTALLED" },
    { "Name": "C758JH9", "Operating System": "Microsoft Windows 10 Enterprise", "ArticleID": "2920727", "ResourceId": "16783579", "LastStatusCheckTime": "20200713175056.407000+000", "DateCreated": "20170710214612.000000+000", "LocalizedDisplayName": "Security Update for Microsoft Office 2016 (KB2920727) 32-Bit Edition", "LastStatusCheckTime1": "20200713175056.407000+000", "LastLogonUserName": "saurpal", "LastLogonTimestamp": "20200703164437.000000+***", "Status CHnage": "20200713175056.407000+000", "Superseeded": "False", "Status": "INSTALLED" },
    { "Name": "C758JH9", "Operating System": "Microsoft Windows 10 Enterprise", "ArticleID": "3114690", "ResourceId": "16783579", "LastStatusCheckTime": "20200713175057.047000+000", "DateCreated": "20170710214844.000000+000", "LocalizedDisplayName": "Security Update for Microsoft Office 2016 (KB3114690) 32-Bit Edition", "LastStatusCheckTime1": "20200713175057.047000+000", "LastLogonUserName": "saurpal", "LastLogonTimestamp": "20200703164437.000000+***",

The set is much bigger; this one set has 77 entries. I'm trying to get a table showing the LocalizedDisplayName and the Status, which can be one of a few values. When I run the search below, it returns 25 records.

index="patching" | spath "name" | search name=LEWKPW10DSK121
| spath
| fields - _raw _time
| rename items{}.* as *
| eval data=mvzip(mvzip(LocalizedDisplayName,Status),ArticleID)
| fields data
| mvexpand data
| makemv data delim=","
| eval LocalizedDisplayName=mvindex(data,0)
| eval Status=mvindex(data,1)
| eval ArticleID=mvindex(data,2)
| table Status LocalizedDisplayName ArticleID

Any pointers would be great. Thanks.
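If any LocalizedDisplayName contains a comma, the makemv data delim="," step misaligns the three columns, and mvexpand can also silently truncate its output when the expanded set exceeds its memory limit (the job inspector would show a warning). A sketch of the same pipeline using mvzip's optional delimiter argument with a string unlikely to appear in the data:

index="patching" | spath "name" | search name=LEWKPW10DSK121
| spath
| fields - _raw _time
| rename items{}.* as *
| eval data=mvzip(mvzip(LocalizedDisplayName,Status,"|||"),ArticleID,"|||")
| fields data
| mvexpand data
| eval LocalizedDisplayName=mvindex(split(data,"|||"),0), Status=mvindex(split(data,"|||"),1), ArticleID=mvindex(split(data,"|||"),2)
| table Status LocalizedDisplayName ArticleID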
Hi everyone, I am trying to create a timechart showing the distribution of accesses over the last 24h, filtered through a stats command. More precisely, I am selecting services with a low number of accesses (but higher than 2) and considering only the 4 least-accessed services, using this:

index=
| bin _time span=1h
| stats count by Service _time
| where count>2
| sort 4 count
| rename count as "Access number"
| timechart span=1h count by Service

The results still show services with 1 or 2 accesses in a day, despite the where clause. Thank you in advance for your help.
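The where and sort in that pipeline operate on per-hour stats rows, so sort 4 count keeps only four rows (not four services), and the final timechart counts those rows (one per Service and hour) rather than summing the access numbers, which is why the chart values stop reflecting accesses. A sketch that instead picks the qualifying services in a subsearch first, assuming the intent is "the 4 least-accessed services with more than 2 accesses over the period" (the index name is a placeholder):

index=your_index
    [ search index=your_index
      | stats count by Service
      | where count>2
      | sort 4 +count
      | fields Service ]
| timechart span=1h count by Service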
Hello, I am trying to produce 1-week and 1-month span charts from a summary index search, but when I use | bin span=1w, instead of showing the last (latest) data point of each week it sums the week's total. I am looking for a trend chart that displays the first or last data point of each week or month. I used the same bin command earlier; the one difference this time is that I am using stats. I use the query in the following format:
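If the goal is the last value in each week rather than the weekly sum, latest() (or earliest() for the first value) does that per bin. A minimal sketch, with the summary index and metric field names as placeholders:

index=your_summary_index source="your_saved_search"
| timechart span=1w latest(your_metric) as your_metric

latest() keeps the chronologically last value that falls inside each span, whereas sum() adds up everything in the span, which matches the behavior described above.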
Hi, I am trying to set up LDAP authentication. The target LDAP host is AD LDS on Windows Server 2012 R2. However, I encountered the following error when adding a new LDAP strategy:

"Failed to retrieve a group with these settings. Consult your LDAP admin or see splunkd.log with ScopedLDAPConnection set to DEBUG"

It is probably caused by the LDAP directory on AD LDS not having any entries whose class is group. Unfortunately, I am not able to change the LDAP configuration on the AD LDS side because it is managed by another organization. So, is there any way to use other classes or attributes as groups in Splunk? Thank you.
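Splunk's authentication.conf can treat each user entry as its own group when the directory has no group objects, by pointing the group settings at the user tree. A sketch, assuming userNameAttribute is uid; every value here must be adjusted to the actual AD LDS schema, so treat the DNs and attribute names as placeholders:

[your_ldap_strategy]
userBaseDN = CN=Users,DC=example,DC=local
userNameAttribute = uid
groupBaseDN = CN=Users,DC=example,DC=local
groupNameAttribute = uid
groupMemberAttribute = uid
groupMappingAttribute = uid

Each username then appears as its own "group" that can be mapped to a Splunk role under Access controls.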
Hi, we are trying to set up around 60 alerts. Ideally, each alert runs every 3 minutes and checks the data for the last 3 minutes. I am aware of the issue with concurrent searches and alerts getting skipped when there are more than 5 concurrent searches. What is the best way to create these alerts? Is there a way to set up the alerts to run at staggered times, as in the example below?

Alert 1 - 12:00:00
Alert 2 - 12:00:05
Alert 3 - 12:00:10
Alert 4 - 12:00:15
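Cron (and Splunk's scheduler) only has minute granularity, so 5-second offsets are not possible; the usual approach is to spread the 60 alerts across the three minutes of the cycle, roughly 20 per offset. A sketch of the three cron schedules:

*/3 * * * *       (alerts 1-20: run at :00, :03, :06, ...)
1-59/3 * * * *    (alerts 21-40: run at :01, :04, :07, ...)
2-59/3 * * * *    (alerts 41-60: run at :02, :05, :08, ...)

Each alert still runs every 3 minutes over a 3-minute window. The allow_skew setting in savedsearches.conf is another option for letting the scheduler spread start times automatically.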
I would like to combine the results of two tables.

| dbxquery query="select * from table1" connection="Connection1"
Fields in table1: ID_USER | NR_CARD | DT_CARD

| dbxquery query="select * from table2" connection="Connection1"
Fields in table2: ID_USER | DS_EMAIL | DS_NAME

The common field between the two tables is ID_USER. How do I join tables 1 and 2 and carry over all fields?

ID_USER (table1/2) | NR_CARD (table1) | DT_CARD (table1) | DS_EMAIL (table2) | DS_NAME (table2)
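Since both tables live behind the same DB Connect connection, the simplest option may be to let the database perform the join and return a single result set; a sketch, assuming standard SQL on the source database:

| dbxquery connection="Connection1" query="SELECT t1.ID_USER, t1.NR_CARD, t1.DT_CARD, t2.DS_EMAIL, t2.DS_NAME FROM table1 t1 JOIN table2 t2 ON t1.ID_USER = t2.ID_USER"

If the two dbxquery calls must stay separate in SPL, appending one result set to the other and regrouping by the shared key also works:

| dbxquery query="select * from table1" connection="Connection1"
| append [| dbxquery query="select * from table2" connection="Connection1"]
| stats values(NR_CARD) as NR_CARD values(DT_CARD) as DT_CARD values(DS_EMAIL) as DS_EMAIL values(DS_NAME) as DS_NAME by ID_USER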
We send data to Splunk Cloud from a Universal Forwarder. I want to add _meta to each event sent to Splunk Cloud. I've added _meta to each stanza in inputs.conf and restarted the forwarder, but the meta does not appear in Splunk Cloud:

[default]
host = HOSTNAME
index = INDEX
source = SOURCE

# Monitor NGINX Logs
[monitor:///var/log/nginx/access.json.log]
disabled = false
sourcetype = SOURCETYPE
_meta = region::sae1 ...

What could I have missed? Is it possible to add the meta without changes on the Splunk Cloud side?
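_meta does create indexed fields, but they do not show up like ordinary search-time fields: they are searchable as region::sae1 right away, while making region=sae1 (and field discovery) work requires a fields.conf entry on the search side. A sketch, assuming the field is named region as in the stanza above:

# fields.conf, in an app deployed to Splunk Cloud (search side)
[region]
INDEXED = true

A quick way to verify the forwarder half is already working is to search index=INDEX region::sae1; if events come back, only the search-side fields.conf piece is missing.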
Hello, I have a simple distributed search configuration on Windows hosts: 1 SH, 1 IDX, and 1 license server. Running a search from the SH gives me a warning: "Search filters specified using splunk_server/splunk_server_group do not match any search peer." And the search does not return any results (searching for index=_internal). The answers found on this same topic here do not seem to solve the problem for me. I recreated the user and the role, no success. I recreated the search peer, without success. Status under distributed search is Healthy and replication status is Successful. Any suggestions for what I could do to get distributed search up and running? Richard
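That warning usually points at a role-level search filter (srchFilter in authorize.conf) or a server-group restriction in distsearch.conf that names a peer which no longer exists or is spelled differently. A sketch of how to locate such a filter, assuming CLI access on the Windows search head (findstr stands in for grep):

splunk btool authorize list --debug | findstr /i "srchFilter splunk_server"
splunk btool distsearch list --debug | findstr /i "servergroup"

If a stale splunk_server=... filter shows up on the role in use, removing or correcting it should let results come back.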
My Splunk search returns one event, as below [screenshot not included]; notice the agent data is in a nested JSON format: agentName and agentSwitch are nested fields within agent. I would like to filter within this result so that the output only displays agentName="ether" and agentSwitchName="soul". I have tried to filter using spath and table, but each time it returns ALL agentNames. How can I correctly filter the output? My search:

| spath
| table environemnt, agent{}.agentName
| search agent{}.agentName="ether"
Hi, I noticed that our O365 message tracing logs stopped getting indexed using the Microsoft Office 365 Reporting Add-on for Splunk v1.2.1. This is a sample error message we got:

2020-07-20 13:19:32,756 ERROR pid=6727 tid=MainThread file=base_modinput.py:log_error:309 | HTTP Request error: 400 Client Error: Bad Request for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-01T00:00:00Z'%20and%20EndDate%20eq%20datetime'2020-07-01T00:15:00Z'

I removed the $ from the "MessageTrace?$filter=StartDate" part of the URL in input_module_ms_o365_message_trace.py:

# Currently "$orderby=Received asc" does not work when retrieving messages with Skiptoken. Just drop "Received asc" then it works.
#microsoft_trace_url = "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$orderby=Received asc&$filter=StartDate eq datetime'%sZ' and EndDate eq datetime'%sZ'" % (start_date.isoformat(), end_date.isoformat())

# cwi: remove $ from filter
#microsoft_trace_url = "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate eq datetime'%sZ' and EndDate eq datetime'%sZ'" % (start_date.isoformat(), end_date.isoformat())
microsoft_trace_url = "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?filter=StartDate eq datetime'%sZ' and EndDate eq datetime'%sZ'" % (start_date.isoformat(), end_date.isoformat())

messages = get_messages(helper, microsoft_trace_url, global_microsoft_office_365_username, global_microsoft_office_365_password)

The input is working on our installation now.
Hi! How can I change the font size and family in the Dashboards Beta app? I can change the font size and family using code, but it only works in a text box:

"fontSize": 21,
"fontFamily": "Helvetica"

I need to change the font in tables and charts too, but the same code that works in the text box won't work anywhere else; I can only change the font color. Here is an example from my code that does not work:

"type": "viz.area",
"options": {
    "axisY2.enabled": true,
    "backgroundColor": "transparent",
    "fontColor": "#FF0000",
    "legend.placement": "top",
    "seriesColors": "[#5FBCFF, #C6335F]",
    "axisTitleY.text": "Orders",
    "axisTitleX.text": "Date",
    "fontSize": 18,
    "fontFamily": "Verdana"

This code only changes the font color, not the font size or font family. Thanks in advance.
Hello, here's the tech stack I am using:

AWS EKS 1.16, with the Docker-based "splunk/splunk:latest" image deployed to EKS using the defaults, "SPLUNK_START_ARGS=--accept-license" as well as SPLUNK_PASSWORD provided.

I used a StatefulSet and a Service. With the Service (LoadBalancer enabled), I am able to access it directly from my laptop; however, with an Ingress enabled, it is not working. I even created a Route 53 record pointing to the service's AWS load balancer, and still cannot access it. I also created a "hello world" deployment (containing a StatefulSet, Service, and Ingress), and I am able to open that one. Kindly let me know if anything needs to be corrected when using the Ingress.
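For comparison with the working hello-world setup, a minimal Ingress for Splunk Web on EKS 1.16 (networking.k8s.io/v1beta1, since the v1 Ingress API only arrives in Kubernetes 1.19) might look like the sketch below. The service name, host, and annotations are assumptions: they presume the AWS ALB Ingress Controller is installed and that the Service exposes Splunk Web on port 8000:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: splunk-web
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: splunk.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: splunk-service   # the Service created for the StatefulSet
              servicePort: 8000             # Splunk Web, not the 8089 management port

Two common gotchas: pointing the Ingress at the management port (8089, which speaks HTTPS) instead of Splunk Web (8000), and a Route 53 record that targets the Service's load balancer rather than the ALB the ingress controller provisions.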