All Topics


Hello, I need to create a search that will display results based on a specific value. My issue is that the following search does not return any results. In the penultimate line, when I replace user_ip with index_field1="1.2.3.4" it works, and when I remove the last two lines I can see that user_ip does contain "1.2.3.4"... But index_field1=user_ip does not match, and the same goes for index_field2...

index=...
| eval field1="1.2.3.4:100"
| rex field=src_ip_port "(?<user_ip>.+)\:(?<user_port>.+)"
| table user_ip user_port
| search index_field1=user_ip index_field2=user_port
| table index_field1 index_field2 user_ip user_port

Thanks in advance for your feedback.
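A likely culprit, sketched here without access to the actual data: in SPL, `| search index_field1=user_ip` compares index_field1 against the literal string "user_ip", not against the value of the user_ip field. A field-to-field comparison needs the `where` command instead:

```
index=...
| rex field=src_ip_port "(?<user_ip>.+):(?<user_port>.+)"
| where index_field1=user_ip AND index_field2=user_port
| table index_field1 index_field2 user_ip user_port
```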
Has anyone worked on dynamically plotting/highlighting areas in a custom image based on a Splunk SPL condition?
Hi All, I am trying to monitor files and folders in a network path using the basic (outline only) Python script shown below. My Splunk environment currently has Python version 2.7. On running this script I get errors like "no module named requests" / "no module named time". I am fairly new to Splunk, so I am not sure how to integrate modules into the environment. The pip install requests command is also failing with an error. Could you please help with a workaround for this? Thanks in advance!

import requests  # this is the import that fails with "no module named requests"
import os
import time

# path of the network directory (note: under Python 2.7 this should be raw_input())
path = input()
x = os.listdir(path)

creation_time = os.path.getctime(path)
local_time = time.ctime(creation_time)
print("Creation Time:", local_time)

# to read lines of text as a list
# (note: open() needs a file path; it will fail if path is a directory)
with open(path) as f:
    line = f.readlines()
print(line)

# Number of files or folders in the network directory
num = len(x)
print("Number of files/folders in directory:", num)
Hi, I want to deploy a new configuration in an inputs.conf file for a specific server from the deployment server. Can anyone suggest some detailed steps? Thanks in advance
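For reference, a rough sketch of the usual approach (the app and host names below are placeholders): package the inputs.conf inside an app under $SPLUNK_HOME/etc/deployment-apps on the deployment server, map that app to the target host in serverclass.conf, and reload the deployment server.

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf on the deployment server
[serverClass:my_inputs_class]
whitelist.0 = target-server.example.com

[serverClass:my_inputs_class:app:my_inputs_app]
restartSplunkd = true
```

Then run `splunk reload deploy-server` so the client picks up the new app on its next phone-home.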
In a Splunk dashboard I am showing a table. In the "host" column of this table I want to apply a CSS condition via a custom renderer in jQuery, and that part works. The problem is that I want to pass arguments from Simple XML; right now they are hard-coded in jQuery (the table below is for reference). In the jQuery function customeCssLoad() I want to pass args from Simple XML into that function. Please help me.
How can we extract the list of items from the code below?

<s:key name="read"> <s:list> <s:item>rest_poc</s:item> <s:item>poc</s:item>

The required output should be the list of items for the key named "read". Example: rest_poc poc

<?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable.
.--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>index=sample_idx | stats count</title> <id>https://hostname.com.au:8089/services/search/jobs/1528783903.136065</id> <updated>2018-06-12T16:14:02.544+10:00</updated> <link href="/services/search/jobs/1528783903.136065" rel="alternate"/> <published>2018-06-12T16:11:43.000+10:00</published> <link href="/services/search/jobs/1528783903.136065/search.log" rel="search.log"/> <link href="/services/search/jobs/1528783903.136065/events" rel="events"/> <link href="/services/search/jobs/1528783903.136065/results" rel="results"/> <link href="/services/search/jobs/1528783903.136065/results_preview" rel="results_preview"/> <link href="/services/search/jobs/1528783903.136065/timeline" rel="timeline"/> <link href="/services/search/jobs/1528783903.136065/summary" rel="summary"/> <link href="/services/search/jobs/1528783903.136065/control" rel="control"/> <author> <name>rest_poc</name> </author> <content type="text/xml"> <s:dict> <s:key name="canSummarize">0</s:key> <s:key name="cursorTime">2038-01-19T14:14:07.000+11:00</s:key> <s:key name="defaultSaveTTL">604800</s:key> <s:key name="defaultTTL">300</s:key> <s:key name="delegate"></s:key> <s:key name="diskUsage">65536</s:key> <s:key name="dispatchState">DONE</s:key> <s:key name="doneProgress">1.00000</s:key> <s:key name="dropCount">0</s:key> <s:key name="earliestTime">1970-01-01T10:00:00.000+10:00</s:key> <s:key name="eventAvailableCount">0</s:key> <s:key name="eventCount">0</s:key> <s:key name="eventFieldCount">0</s:key> <s:key name="eventIsStreaming">1</s:key> <s:key name="eventIsTruncated">1</s:key> <s:key name="eventSearch"></s:key> <s:key name="eventSorting">desc</s:key> <s:key name="isBatchModeSearch">0</s:key> <s:key name="isDone">1</s:key> <s:key name="isEventsPreviewEnabled">0</s:key> <s:key name="isFailed">0</s:key> 
<s:key name="isFinalized">0</s:key> <s:key name="isPaused">0</s:key> <s:key name="isPreviewEnabled">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isRemoteTimeline">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isSavedSearch">0</s:key> <s:key name="isTimeCursored">0</s:key> <s:key name="isZombie">0</s:key> <s:key name="keywords"></s:key> <s:key name="label"></s:key> <s:key name="normalizedSearch"></s:key> <s:key name="numPreviews">0</s:key> <s:key name="optimizedSearch">index=sample_idx | stats count</s:key> <s:key name="pid">9035</s:key> <s:key name="pid">9035</s:key> <s:key name="priority">5</s:key> <s:key name="provenance"></s:key> <s:key name="remoteSearch"></s:key> <s:key name="reportSearch">index=sample_idx | stats count</s:key> <s:key name="resultCount">5</s:key> <s:key name="resultIsStreaming">0</s:key> <s:key name="resultPreviewCount">5</s:key> <s:key name="runDuration">0.015</s:key> <s:key name="sampleRatio">1</s:key> <s:key name="sampleSeed">0</s:key> <s:key name="scanCount">0</s:key> <s:key name="searchCanBeEventType">0</s:key> <s:key name="searchTotalBucketsCount">0</s:key> <s:key name="searchTotalEliminatedBucketsCount">0</s:key> <s:key name="sid">1528783903.136065</s:key> <s:key name="statusBuckets">0</s:key> <s:key name="ttl">300</s:key> <s:key name="performance"> <s:dict> <s:key name="command.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">35</s:key> <s:key name="output_count">5</s:key> </s:dict> </s:key> <s:key name="command.inputlookup"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">172</s:key> </s:dict> </s:key> <s:key name="command.stats"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">35</s:key> </s:dict> </s:key> <s:key 
name="dispatch.check_disk_usage"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.createdSearchResultInfrastructure"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.inputlookup"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.stats"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.FinalEval"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.matchReportAcceleration"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.optimization"> <s:dict> <s:key name="duration_secs">0.006</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.reparse"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.toJson"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.toSpl"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.writeStatus"> <s:dict> <s:key name="duration_secs">0.007</s:key> <s:key name="invocations">7</s:key> </s:dict> </s:key> <s:key name="startup.configuration"> <s:dict> <s:key 
name="duration_secs">0.089</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="startup.handoff"> <s:dict> <s:key name="duration_secs">0.003</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="fieldMetadataStatic"> <s:dict> <s:key name="Description"> <s:dict> <s:key name="type">unknown</s:key> <s:key name="groupby_rank">0</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="fieldMetadataResults"> <s:dict> <s:key name="Description"> <s:dict> <s:key name="type">unknown</s:key> <s:key name="groupby_rank">0</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="messages"> <s:dict/> </s:key> <s:key name="request"> <s:dict> <s:key name="search">index=sample_idx | stats count</s:key> </s:dict> </s:key> <s:key name="runtime"> <s:dict> <s:key name="auto_cancel">0</s:key> <s:key name="auto_pause">0</s:key> </s:dict> </s:key> <s:key name="eai:acl"> <s:dict> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>rest_poc</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>rest_poc</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="owner">rest_poc</s:key> <s:key name="modifiable">1</s:key> <s:key name="sharing">global</s:key> <s:key name="app">search</s:key> <s:key name="can_write">1</s:key> <s:key name="ttl">300</s:key> </s:dict> </s:key> <s:key name="searchProviders"> <s:list/> </s:key> </s:dict> </content> </entry>   Please help, thanks in advance.
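If the XML is being processed outside Splunk, Python's standard xml.etree can pull the items out. A sketch against a trimmed stand-in for the response above (the `s:` prefix maps to the `http://dev.splunk.com/ns/rest` namespace):

```python
import xml.etree.ElementTree as ET

# A trimmed stand-in for the Atom response in the question
xml_text = """<content xmlns:s="http://dev.splunk.com/ns/rest">
  <s:dict><s:key name="perms"><s:dict>
    <s:key name="read"><s:list>
      <s:item>rest_poc</s:item>
      <s:item>poc</s:item>
    </s:list></s:key>
  </s:dict></s:key></s:dict>
</content>"""

root = ET.fromstring(xml_text)

# Find every <s:key name="read"> element, then collect its <s:item> texts
read_items = [
    item.text
    for key in root.iter("{http://dev.splunk.com/ns/rest}key")
    if key.get("name") == "read"
    for item in key.iter("{http://dev.splunk.com/ns/rest}item")
]
print(read_items)  # → ['rest_poc', 'poc']
```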
Hi, First time I have ever seen this, but curious if it's just me. I have a search defined as: <search id="device_base_index"> <query> index=oi sourcetype=device earliest=-30d@d latest=+2d@d </query> </search> And a table as: <table> <title>Data Readiness</title> <search base="device_base_index"> <query>fields deviceId inventoryStatus configStatus | eval ic=configStatus+"::"+inventoryStatus | makemv delim="::" ic | mvexpand ic | streamstats count by deviceId | eval status=if(count = 1, "config", "inventory") | fields deviceId status ic | chart count over status by ic</query> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> The dashboard only shows the results from the base search and doesn't include the results as if they had been passed through the table part of the query. When I click on the magnifying glass, it loads up the full search, so I know the query and base search are attached at some point. The other strange thing is that when I look at the log, it only shows the base search: Job Details Dashboard OptimizedSearch: | search (earliest=-30d@d index=oi latest=+2d@d sourcetype=device) But in the search.log it does see both parts of the full query: Expanded index search = (index=oi sourcetype=device _time>=1653314400.000 _time<1656079200.000) base lispy: [ AND index::oi sourcetype::device ] And then it sees the other part of the query: PARSING: postprocess "fields deviceId inventoryStatus configStatus etc... search.log contains no ERROR messages. If I add the query to the table and don't use the base search, it all runs fine. Any ideas why only the base_search part is executed and not the table query as well? cheers -brett
Does anyone know where I can request an annual application security assessment report for Splunk products? I am looking for the latest Splunk UBA security assessment, such as a DAST, SAST, or pentest report. Thank you
When I am using Splunk Web to perform a date-range (or date and time range) search, the date picker is in the US date format (MM/DD/YYYY) rather than the rest-of-the-world format (DD/MM/YYYY), even though I am logged in to Splunk with the en-GB locale/URL path. Everything else works correctly for en-GB settings as far as I can tell, but the date picker for Date Range and Date and Time Range seems stuck in the wrong format. Here's an example of what I am seeing (screenshot of the date picker): as you can see, it's MM/DD/YYYY. I've tried multiple browsers and operating systems, and the issue very much seems to be server-side, somewhere. This is on Splunk Enterprise 8.2.5. However, I should add that this is only impacting some newer servers (1-2 years old) that I've set up; systems running other operating systems, but also on 8.2.5, are not exhibiting this behaviour (either on the Search Head or if I go directly to an Indexer). I've also tried creating new/test users, and it impacts them as well, and there's nothing obvious in my user-prefs.conf that seems to relate to this, on either the working or the non-working systems. This must be so stunningly obvious that I'm just missing it. Any suggestions appreciated!
We are seeing some errors about our cluster master being an unhealthy instance. We have a link called "Generate Diag", but when we click it the message reads: The app "splunk_rapid_diag" is not available. We are on version 9.0.0 of Splunk Enterprise. The following Splunk documentation says how to get to it, but we do not see that option: https://docs.splunk.com/Documentation/Splunk/9.0.0/Troubleshooting/Rapiddiag "How do I access RapidDiag? The RapidDiag UI is located in the Settings menu, under System > RapidDiag."
Dear Community, how do I display values in the second dropdown based on the value selected in the first dropdown?

<input type="dropdown" token="site" searchWhenChanged="true"> | inputlookup regions_instances.csv | fields region region_value

<input type="dropdown" token="instance" searchWhenChanged="true"> | inputlookup regions_instances.csv | search region=$site$ | fields instance instance_value
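One way to wire the second input is with fieldForLabel/fieldForValue filled in; a sketch (whether $site$ should match region or region_value depends on the fieldForValue of the first dropdown, so treat the field names here as assumptions):

```xml
<input type="dropdown" token="instance" searchWhenChanged="true">
  <label>Instance</label>
  <fieldForLabel>instance</fieldForLabel>
  <fieldForValue>instance_value</fieldForValue>
  <search>
    <query>| inputlookup regions_instances.csv | search region=$site$ | fields instance instance_value</query>
  </search>
</input>
```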
Hi Splunkers, I have an issue with the timestamp the data is being indexed with. Here is an example of my logs. I applied the props at sourcetype level, but it doesn't seem to be working. Please help!

Scenario 1
Time: 6/20/22 10:35:59.833 PM
Event: 2022-06-20 18:35:59,833  [200] Error logs http client

props.conf
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 24
TZ = UTC

Scenario 2
Time: 6/20/22 10:24:05.000 PM
Event: 2022-06-20 22:23:53 Error logs http client
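For reference, a props.conf sketch for the first event format (the sourcetype name is a placeholder; note that timestamp settings must be deployed to the first full Splunk instance that parses the data, i.e. an indexer or heavy forwarder, not a universal forwarder):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 24
TZ = UTC
```

The second scenario's event has no milliseconds, so a TIME_FORMAT ending in ,%3N would not match that event exactly.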
Hello, I am digging through my _audit index to see what searches people are running over time, but I am confused by the following fields: api_et, api_lt, apiStartTime, apiEndTime. It would appear that api_et and apiEndTime are the same thing, and likewise api_lt and apiStartTime. I get that api_et/api_lt are epoch times and the others are formatted dates. Why do some entries (of type search) have api_et and api_lt, while others have apiStartTime and apiEndTime? Thus far I have to do any calculations based on the presence of both sets and use coalesce to choose the one that's not bogus. --jason
Hello, I have a Linux machine where Splunk Enterprise is installed, and I would like to use it as a heavy forwarder to send the files to the cloud. How do I install the "app" (splunkclouduf.spl) from the cloud instance on Splunk Enterprise? I don't have access to the Splunk Enterprise web interface, only access to the Linux machine. Regards
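A minimal sketch of the CLI route, assuming the default install path and that the .spl file has already been copied onto the machine (both paths are assumptions):

```
# On the Linux machine, as the user that runs Splunk
/opt/splunk/bin/splunk install app /tmp/splunkclouduf.spl
/opt/splunk/bin/splunk restart
```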
First of all, I am new to cyber and got Splunk dumped in my lap. I am really trying to get knowledgeable on it, but 1) I am horrible with coding, and apparently that includes regex, and 2) long lines of code or search strings are like sensory overload to me. That being said, I am trying to clean up our alerts, as we are getting bogged down daily with well over 3k alerts that could most likely be expunged. Many of our alerts are based on tstats search strings. It makes a great report, but I am unable to get into the nitty gritty. For example, the brute force string below brings up a statistics table with various elements (src, dest, user, app, failure, success, locked) showing failure vs. success counts for particular users who meet the criteria in the string. My issue: when I click on a user and choose "view events", it brings up a new search with a modified string (of course), but it still only shows a tstats table, now with different headers (action, src, dest, user, app, count, failure, success). What I would like is for "view events" on a particular user to actually show me the events and the correlating log input. Is this possible? Why would I want a brute force alert if I cannot narrow down to the events, especially the failed logins? Again, please have mercy, I am entry level and still learning Splunk. I love the apps and the abilities it has, but using the search box makes me feel like I have lost all my intelligence.
The brute force search:

| tstats `summariesonly` values(Authentication.app) as app, count from datamodel=Authentication.Authentication by Authentication.action, Authentication.src, Authentication.dest, Authentication.user
| `drop_dm_object_name("Authentication")`
| search user!=unknown user!=SYSTEM app!=splunkd_remote_searches src!=MWG* user!=TWC-*
| eval success=if(action="success",count,0), failure=if(action="failure",count,0)
| stats values(dest) as dest, values(user) as user, values(app) as app, sum(failure) as failure, sum(success) as success by src
| join user [search index=top_wineventlog EventCode=4740 | eventstats count(user) as locked_count by user | dedup user, host | table user, locked_count]
| search failure>30 success>0
| where failure>success
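One way to pivot from a tstats summary back to raw events is a separate drilldown search against the underlying authentication data. A sketch (the wildcard index, tag, and placeholder user are assumptions, not taken from the alert):

```
index=* tag=authentication action=failure user="<clicked_user>"
| table _time index sourcetype src dest user app action
```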
I have an event which is constructed like the following:   { name: string, time: string, duration: string, logs: JSONObjects[] }   When I download the event, I just want the logs ... See more...
I have an event which is constructed like the following:   { name: string, time: string, duration: string, logs: JSONObjects[] }   When I download the event, I just want the logs which is everything inside [] but without the head part which is "{logs:" and the last "}" To do that how do I construct the search query? 
I have this query and I want to count how many logins were made by id. If a person logged in 3 times I want to count them only once, so if there were 15 logins in total I want to count one per id.

basic search | fields idLogin | stats values(idLogin) as Login, dc(idLogin) as Quantity | table Quantity

But my field idLogin is returning null.
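If idLogin is null because it is never extracted, one sketch is to extract it with rex before the stats (the idLogin=<value> pattern is an assumption about the raw event format):

```
basic search
| rex "idLogin=(?<idLogin>\S+)"
| stats dc(idLogin) as Quantity
```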
I have configured the Splunk Add-on for Google Workspace on a heavy forwarder that is performing data collection and then forwarding the data to Splunk Cloud. We followed the instructions at https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/About both when configuring the Google Cloud service account and when configuring the add-on. I configured the add-on with the Google Cloud service account using the JSON key generated on console.cloud.google.com and then configured the inputs. We are not getting any data, and when we look at the internal logs from the heavy forwarder where the add-on is deployed we are seeing 401 responses like the following:

requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://admin.googleapis.com/admin/reports/v1/activity/users/all/applications/token?maxResults=1000&startTime=2022-06-22T19%3A07%3A10.464Z&endTime=2022-06-22T19%3A07%3A10.464Z
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://admin.googleapis.com/admin/reports/v1/activity/users/all/applications/drive?maxResults=1000&startTime=2022-06-22T19%3A07%3A10.521Z&endTime=2022-06-22T19%3A07%3A10.521Z

We also went through the troubleshooting section of the docs, to no avail: https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Troubleshoot Any guidance from someone who has deployed the GWS add-on and gotten a 401 after configuring the inputs would be greatly appreciated.
Hi, good afternoon. Our heavy forwarder is unable to forward to one of the indexers but is able to send data to another indexer. Here is what I see in splunkd.log on the heavy forwarder:

06-22-2022 13:24:03.471 -0400 ERROR TcpOutputFd [19320 TcpOutEloop] - Read error. Connection reset by peer
06-22-2022 13:24:03.472 -0400 ERROR TcpOutputFd [19320 TcpOutEloop] - Read error. Connection reset by peer
06-22-2022 13:24:03.472 -0400 ERROR TcpOutputFd [19320 TcpOutEloop] - Read error. Connection reset by peer
06-22-2022 13:24:03.472 -0400 WARN AutoLoadBalancedConnectionStrategy [19320 TcpOutEloop] - Applying quarantine to ip=xx.xx.xxx.xxx port=9996 _numberOfFailures=2
06-22-2022 13:24:03.473 -0400 ERROR TcpOutputFd [19320 TcpOutEloop] - Read error. Connection reset by peer
06-22-2022 13:24:03.473 -0400 WARN AutoLoadBalancedConnectionStrategy [19320 TcpOutEloop] - Applying quarantine to ip=yy.yy.yy.yy port=9996 _numberOfFailures=2
Hello, would anyone be able to help me with this in Dashboard Studio? I have a date/time picker and only want to display the "Between" from and to times; I don't want the rest of the options to be visible. Please see the attached picture: the items in the red boxes should not be visible. Or, alternatively, just have the calendar visible to pick dates.