All Topics


We log Puppet facts in a large JSON payload, and I want to combine the values of all fields matching a wildcarded expression into a single field for further processing. A given server may have any number of IP addresses associated with it, e.g.:

networking.interfaces.eth0.ip
networking.interfaces.eth1.ip
networking.interfaces.eth1:1.ip through networking.interfaces.eth1:17.ip

Each of these fields holds an IP address, and I'd like to append them all into a single field. I've tried this with coalesce, but that doesn't support wildcards; mvappend just doesn't seem to do anything, or perhaps I'm using it incorrectly with foreach.

tl;dr: given a variable number of fields networking.interfaces.*.ip, how do I concatenate them all into a delimited field ip_addresses, e.g. '10.11.12.13 | 10.11.12.14 | 10.11.12.15 | ...'? Thanks.
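One possible approach (a sketch, not tested against this data) is to accumulate the wildcarded fields with foreach and then join them. The single quotes around <<FIELD>> are required because the field names contain dots:

```
| eval ip_addresses=""
| foreach networking.interfaces.*.ip
    [ eval ip_addresses=mvappend(ip_addresses, '<<FIELD>>') ]
| eval ip_addresses=mvjoin(mvfilter(ip_addresses!=""), " | ")
```

The mvfilter call drops the empty seed value before joining with " | " as the delimiter.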
I have two indexes, one called linux and another called firewall. How can I correlate both indexes to determine whether the src field (of the linux index) is equal to the UserIP field (of the firewall index)?
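A common pattern for this (a sketch, assuming both fields are extracted at search time) is to normalize the two fields into one and group on it:

```
index=linux OR index=firewall
| eval ip=coalesce(src, UserIP)
| stats values(index) as indexes dc(index) as index_count by ip
| where index_count=2
```

Rows surviving the where clause are IP values seen in both indexes.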
Greetings Splunkers,

I'm researching an issue with Splunk scheduled reports and came across the .conf2017 material "Making the Most of the Splunk Scheduler" (see attached snippet of page 10). The issue we're seeing is that some scheduled jobs return no results, but when the same SPL is run interactively there are results. The jobs are scheduled as a cron within the Splunk UI ("Run on Cron Schedule").

Can anyone please clarify a couple of things?

- The material mentions that cron is "Limited to a single machine". What does this mean, and how does Splunk determine which machine/server to utilize? (We schedule most of our jobs as cron because it gives a little more flexibility in setting the start time.)
- I also went through the limits.conf and authorize.conf documentation and found that all of the relevant Splunk settings are still at their defaults. In further researching the issue, it seems there are approximately 30 jobs starting or running at 04:00 when the job in question returns no results. So the other question is: are we hitting a system limit, and can Splunk be optimized or tweaked to support more jobs? Is the system limit causing the report to return no results? If Splunk can be optimized/tweaked, which parameters or settings need to change?

Any thoughts? Thanks in advance for any help and insight. Cheers!
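For reference, scheduler concurrency is governed by a handful of limits.conf settings; the values below are just the shipped defaults, so verify them against your version's documentation:

```
# limits.conf
[search]
base_max_searches = 6        # baseline for total concurrent searches
max_searches_per_cpu = 1     # adds this many per CPU core

[scheduler]
max_searches_perc = 50       # percent of total concurrency available to scheduled searches
```

With defaults on a small search head, roughly 30 simultaneous jobs can easily exceed the scheduled-search quota, causing jobs to be deferred or skipped.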
How do I get a health status (via the GUI), and what are the basics of troubleshooting it? Also, please advise on how to check the Splunk Enterprise and ES heartbeats to make sure they are alive and kicking, for my daily checks. Thanks.
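Besides the splunkd health report icon in the UI, the same information can be pulled with a search for daily checks — a sketch, assuming your role has permission to query the REST endpoint:

```
| rest /services/server/health/splunkd
| fields title health
```

Each feature's health (green/yellow/red) is reported, which can be wrapped in a scheduled alert.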
We're running a Splunk Cloud environment and are trying to figure out how we could trigger an on-premises script to restart a service when Splunk sees issues in the logs. From what I've read in the documentation, we'd be looking to run an adaptive response action, but that seems to be limited to ES customers. We're not currently paying for ES, yet the action is listed on a general Splunk Cloud page. Hybrid search from an on-premises heavy forwarder sounded like another alternative, but it does not allow scheduled searches.

Mostly I'm looking for input on what other Splunk Cloud customers are doing to run on-premises scripts from alerts.

From the Splunk Cloud service description: "Splunk Cloud Platform does not provide system-level access. This means you cannot define alerts that run operating-system scripts or use other system services (although vetted and compatible apps can do so). Alerts can be sent by email or HTTPS POST using Splunk software webhooks. You might be required to set up an endpoint inside your network. If you have both Splunk Enterprise and Splunk Cloud Platform, you can run an on-premises search head to support searches that require alert actions. For more information, see Set up an Adaptive Response relay in the Administer Splunk Enterprise Security Manual."
https://docs.splunk.com/Documentation/SplunkCloud/8.1.2101/Service/SplunkCloudservice

From the hybrid search documentation: "Only ad-hoc searches are supported. Scheduled searches are not supported."
https://docs.splunk.com/Documentation/SplunkCloud/8.1.2101/User/SearchCloudfromEnterprise
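The webhook route mentioned in the service description can be attached to a saved alert; a sketch (the stanza name and endpoint URL are placeholders for a listener you would host inside your network):

```
# savedsearches.conf
[restart_service_alert]
action.webhook = 1
action.webhook.param.url = https://listener.example.internal/restart
```

The on-premises listener receiving the POST would then perform the actual service restart.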
I need to get a new line (\n) after the value — is it possible?

| eval check=case('value'>0, 'value'+" "+"Good", 'value'<0, 'value'+" "+"Bad", 'value'=0, 'value'+" "+"Okay")

I am getting results like below:

5 Good
-2 Bad
0 Okay
1 Good
0 Okay

Is it possible to get the value and the label on separate lines, like below?

5
Good
-2
Bad
0
Okay
1
Good
0
Okay
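A literal \n inside eval won't render in the statistics table, but a multivalue field displays each of its values on its own line; a sketch along those lines:

```
| eval label=case('value'>0, "Good", 'value'<0, "Bad", 'value'=0, "Okay")
| eval check=mvappend(tostring('value'), label)
```

The check column then shows the number on one line and the label on the next.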
I would like to run two searches, calculate the difference between two fields, and plot the result using timechart. I have tested both searches independently and they work fine. I am trying this:

<search A>
| stats count max(size) AS Users_Waiting
| join [ search <search B>
    | stats count as Daily_Users
    | streamstats sum(Daily_Users) as Cumulative_Users ]
| timechart span=1d Cumulative_Users-Users_Waiting

So basically, I want to take the count of the first search, which is Users_Waiting, take the count of the second search, which is Cumulative_Users, and draw a timechart of (Cumulative_Users - Users_Waiting). Is my approach correct?
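The join is usually unnecessary here; one sketch avoids it by combining both searches and computing the difference per day (this assumes the two aggregations don't interfere — if they do, each would need an eval(if(...)) guard to restrict it to its own event set):

```
(<search A>) OR (<search B>)
| timechart span=1d max(size) as Users_Waiting count as Daily_Users
| streamstats sum(Daily_Users) as Cumulative_Users
| eval Diff=Cumulative_Users-Users_Waiting
| fields _time Diff
```

Note that timechart takes aggregations, not arithmetic expressions, so the subtraction has to be a separate eval after the timechart.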
Hi Splunk community! I'm trying to index a CSV file where multiple values contain special characters such as æ, ø, å and | (vertical bar). The problem is that characters like these are being indexed as '\xF8', '\xE6' and the like, and some strings have '?' inserted as the first and/or last character.

When I open the file in Notepad++ and/or Sublime Text, the special characters appear correctly, and Notepad++ reports the encoding as UTF-8-BOM. I also checked the encoding on a *nix machine using the file command and received: Filename.csv: UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators.

I have tried configuring my props.conf for the input with both:
- CHARSET=AUTO
- CHARTSET=UTF-8
But neither of these solves the issue. I also tried exporting my CSV file as Unicode and indexing with the charset set to AUTO and UCS-2LE, which resulted in many lines being interpreted as Chinese symbols.

Might someone have experienced and solved something similar?
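Worth noting: the second setting above is spelled CHARTSET rather than CHARSET — if that is not just a transcription slip, Splunk would silently ignore it. A minimal props.conf sketch for a UTF-8 CSV (the sourcetype name is illustrative):

```
# props.conf
[my_csv]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
```

With CHARSET = UTF-8, the leading BOM should be handled rather than surfacing as '?' characters.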
Hi, im really struggling to split out events from my json at the moment. currently i only get a single event with multi values in or multiple events but no field names here is an example response from a request with the props.conf setup.   [ CMMS_ECP_TASKS_JSON1 ] BREAK_ONLY_BEFORE=(\{|\[\s+{) BREAK_ONLY_BEFORE_DATE=false CHARSET=AUTO LINE_BREAKER=(\}|\[\s+{) MAX_DAYS_HENCE=180 MAX_TIMESTAMP_LOOKAHEAD=180 MUST_BREAK_AFTER=(\}|\}\s+\]) NO_BINARY_CHECK=true SEDCMD-remove_footer=s/\]\s+\}//g SEDCMD-remove_header=s/(\{(\s+.*){3}\[)//g SEDCMD-remove_trailing_commas=s/\},/}/g SHOULD_LINEMERGE=true TIME_FORMAT=%Y-%m-%dT%H:%M:%S%%% TIME_PREFIX=\"dateFrom\"\: category=Custom disabled=false pulldown_type=true description=test json {"id":16,"dateFrom":"2021-03-03T00:00:00+01:00","priority":3},{"id":29,"dateFrom":"2021-01-28T00:00:00+01:00","priority":3},{"id":183,"dateFrom":"2021-01-19T00:00:00+01:00","priority":2},{"id":184,"dateFrom":"2021-01-19T00:00:00+01:00","priority":2},{"id":197,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":216,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":217,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":218,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":219,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":220,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":221,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":222,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":223,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":224,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":225,"dateFrom":"2021-03-22T00:00:00+01:00","priority":2},{"id":226,"dateFrom":"2021-04-05T00:00:00+02:00","priority":2},{"id":227,"dateFrom":"2021-04-05T00:00:00+02:00","priority":2},{"id":228,"dateFrom":"2021-04-05T00:00:00+02:00","priority":2},{"id":229,"dateFrom":"2021-04-05T00:00:00+02:00","priority":2},{"id":230,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":231,"
dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":232,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":233,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":234,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":235,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":236,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":237,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":238,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":239,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":240,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":241,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":242,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":243,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":244,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":245,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":246,"dateFrom":"2021-04-19T00:00:00+02:00","priority":2},{"id":247,"dateFrom":"2021-05-03T00:00:00+02:00","priority":2},{"id":248,"dateFrom":"2021-05-03T00:00:00+02:00","priority":2},{"id":249,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":250,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":251,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":252,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":253,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":254,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":255,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":256,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":257,"dateFrom":"2021-05-17T00:00:00+02:00","priority":2},{"id":258,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":259,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":260,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":261,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":262,"dateFrom":"2021-05-31T00:00:00+02:00","priority
":2},{"id":263,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":264,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":265,"dateFrom":"2021-05-31T00:00:00+02:00","priority":2},{"id":266,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":267,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":268,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":269,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":270,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":271,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":272,"dateFrom":"2021-06-14T00:00:00+02:00","priority":2},{"id":310,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":311,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":312,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":313,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":314,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":315,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":316,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":317,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":318,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":319,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":320,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":321,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":322,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":323,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":324,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":325,"dateFrom":"2021-06-28T00:00:00+02:00","priority":2},{"id":326,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":327,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":328,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":329,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":330,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":331,"dateFrom":"1753-01-01T00:00:00+
01:00","priority":2},{"id":332,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":333,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":334,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":335,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":336,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":337,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":338,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":339,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":340,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":341,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":342,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":343,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":344,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":345,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":346,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":347,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":348,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":349,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":350,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2},{"id":351,"dateFrom":"1753-01-01T00:00:00+01:00","priority":2}    
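An alternative sketch for the props.conf above: rather than merging lines back together, break directly on the commas between JSON objects and let each object become one event (settings are illustrative and untested against this feed):

```
[ CMMS_ECP_TASKS_JSON1 ]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,)\{"id"
TIME_PREFIX = "dateFrom":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
SEDCMD-remove_header = s/^\{[^\[]*\[//
SEDCMD-remove_footer = s/\]\s*\}\s*$//
KV_MODE = json
```

With SHOULD_LINEMERGE = false and a LINE_BREAKER whose capture group consumes the separating comma, each {...} object indexes as its own event, and KV_MODE = json extracts id, dateFrom, and priority as fields at search time.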
I am getting output for Docker services on a Linux server via the docker ps -a command, which lists the services. I was unable to split the fields properly; please help with that.

CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
93201914aec5   3.2.1.109:50/help-documentation-app:0.1.13   "httpd-foreground"   18 hours ago   Up 18 hours   80/tcp   PAContainer_helpservice.1.r3b5796b5jm8x5sxu4iec1br6
3601ae2ab0ea   1.22.7.19:500/downtime-app:0.6.10-8.1_SIM1   "/bin/sh -c 'node di…"   18 hours ago   Up 18 hours   4200/tcp   PAContainer_downtime-app.1.jvd57hdy01syxh6hon2r3f7dr
f869d2cd58bf   haoxy:1.8   "/bin/bash -c 'sourc…"   18 hours ago   Up 18 hours      PAContainer_haproxy.1.eol3crx27srigpgjrpw31te2j
430e3d3fe2c3   13.2.167.19:500/productionmetrics-app:0.5.17   "/bin/sh -c 'node di…"   18 hours ago   Up 18 hours   4200/tcp
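One sketch for splitting the rows with rex. This is fragile by nature — CREATED and STATUS are free text and PORTS may be empty — so treat the pattern as a starting point to adjust against real events:

```
| rex "^(?<container_id>\S+)\s+(?<image>\S+)\s+\"(?<command>[^\"]*)\"\s+(?<created>.+?\sago)\s+(?<status>(Up|Exited)[^\s].*?)\s+(?:(?<ports>\S+\/tcp)\s+)?(?<name>\S+)$"
```

An alternative worth considering is collecting the same data as JSON at the source (docker ps --format '{{json .}}'), which avoids positional parsing entirely.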
I am sending JSON output files to the Splunk HEC using curl, but in Splunk the data is received as a normal event, not in JSON format. There are no timestamps in the logs. I tried sending with both collector and collector/raw, and tried both a manually created JSON sourcetype and _json. The data gets into Splunk, but not as JSON. If I apply backslash escaping to all fields, like "{\"event\":\"testing\"}", then I do get JSON format. Is there any workaround I can do?
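The escaping symptom usually means the shell is consuming the double quotes before curl sees them; wrapping the payload in single quotes avoids backslashes entirely. A sketch (host, port, and token are placeholders):

```
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"sourcetype": "_json", "event": {"message": "testing"}}'
```

With the /services/collector/event endpoint, the event payload itself must be valid JSON for the _json sourcetype to render it as structured data.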
I have a search that returns the number of transactions for each individual HTTP result code and maps them to the respective result code name via a lookup table: 2xx and 3xx map to "Successful", 4xx maps to "Auth Failures", and 5xx maps to "Server Error". I then use Power BI as the reporting tool via Splunk ODBC.

Power BI expects all three result code names to be in the search report every time, so if, for instance, there are no failures for that period, "Server Error" will not be in the search report, which means the Power BI report refresh fails. So I need all three result code names returned regardless, with 0 filled in as the number of transactions.

Part of my search is shown below:

index search
| lookup AppResultCode ResultCode as TransactionStatus OUTPUT Status
| stats sum(NoOfTransactionsPerStatus) as NoOfTransactionsPerStatus sum(PercentTransactions) as PercentTransactions by Status
| table Status NoOfTransactionsPerStatus PercentTransactions

Status                   NoOfTransactionsPerStatus   PercentTransactions
Authentication Failure   205                         0.40
Successful               46,808                      99.57

In the case above, the result doesn't include "Server Error", so I am looking at adding it with 0 filled in for the number of transactions and the percentage.
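One common pattern is to append a zero-valued row for every expected status and then re-aggregate, so real counts win where they exist; a sketch (status names are taken from the question and may need to match your lookup exactly):

```
... existing search through the lookup ...
| append
    [| makeresults
     | eval Status=split("Successful,Authentication Failure,Server Error", ",")
     | mvexpand Status
     | eval NoOfTransactionsPerStatus=0, PercentTransactions=0
     | fields - _time ]
| stats sum(NoOfTransactionsPerStatus) as NoOfTransactionsPerStatus
        sum(PercentTransactions) as PercentTransactions by Status
```

Statuses absent from the period then show up with 0 in both columns instead of being missing, which keeps the Power BI refresh happy.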
When I initially install Splunk I am able to use it with no problems, but once my laptop has been turned off and on again I am unable to start Splunk. When I try to start Splunk from the command prompt without admin mode, I get access denied. Using admin mode I do not get any errors in the command line, but once everything has finished it says it's starting, and then the next and last line is 'Splunkd: Stopped' — it starts up then instantly stops, which leaves me without access. The only fix I have currently is uninstalling and reinstalling Splunk.
Please find below a single log entry with multiple lines:

>Validation results
    Message 1) sucess: true
    Message 2) sucess: false
    Reason : All is an invalid log event type
    Message 3) sucess: true
    ......

I need a rex to fetch only the 'false' lines together with their Reason lines; everything else should be ignored. I tried the rex below but am not getting proper results:

| rex field=_raw "(?ms)(?<result>(.*)(?:true)" | table result
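A sketch that extracts each failing message together with its reason — it assumes the Reason line always follows the 'sucess: false' line, and keeps the source's spelling of 'sucess':

```
| rex max_match=0 "(?m)sucess: false\s+(?<result>Reason\s*:\s*[^\r\n]+)"
| table result
```

With max_match=0, result becomes a multivalue field holding one reason per failing message in the event.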
I am using splunk-sdk to connect:

import splunklib.client as client

service = client.connect(host=host, port=8089, scheme="https",
                         username="username", password="password")

but am getting ConnectionRefusedError(111, 'Connection refused').
Does any Splunk trooper have a short list of how to maintain Splunk Enterprise and Splunk ES? I am looking at checking the heartbeat, improving speed and performance, and of course securing them — adding efficiency for the sake of users. Any backup & DR steps are very helpful to me. Thank you.
I am looking at the difference between two events in one source file; those events share the same ID values and I have to calculate the difference. After an append search I got the values below:

_time                     Customer   order
2021-03-25 08:30:28.485   123456
2021-03-25 03:53:57.201   123457
2021-03-25 04:43:50.254   123458
2021-03-25 12:59:31.464   123459
2021-03-25 08:30:28.485              123456
2021-03-25 03:53:57.201              123457
2021-03-25 04:48:50.254              123458
2021-03-25 12:59:31.464   123451
2021-03-25 22:59:31.464   123452
2021-03-25 04:59:31.464   123454
2021-03-25 04:59:33.464              123454
2021-03-25 05:00:31.464   123454
2021-03-25 05:05:31.464   123454
2021-03-25 05:10:35.464              123454

The output should be like:

Starttime                 EndTime                   ID       (Start-End) Diff
2021-03-25 08:30:28.485   2021-03-25 08:30:28.485   123456
2021-03-25 03:53:57.201   2021-03-25 03:53:57.201   123457
2021-03-25 04:43:50.254   2021-03-25 04:48:50.254   123458
2021-03-25 04:59:31.464   2021-03-25 04:59:33.464   123454
2021-03-25 05:05:31.464   2021-03-25 05:10:35.464   123454

I am trying the search below, but not all values come through properly. After calculating the difference I also have to visualize differences greater than 10 minutes in a chart.

base search
| table _time customer
| rename customer as ID, _time as Starttime
| append
    [ search base search
      | table _time order
      | rename order as ID, _time as Endtime ]
| table Starttime Endtime ID
| stats count as new values(*) as * by ID
| eval new1=mvzip(Starttime, Endtime)
| mvexpand new1
| makemv delim="," new1
| eval Starttime1=mvindex(new1,0)
| eval Endtime1=mvindex(new1,1)
| table Starttime1 Endtime1 ID new

Thanks in advance.
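A sketch without append or mvzip, pairing starts and ends with a single stats pass (field names customer/order come from the question; this assumes one start/end pair per ID, so repeated IDs like 123454 would need extra grouping, e.g. with streamstats, before the stats):

```
base search
| eval ID=coalesce(customer, order),
       type=if(isnotnull(customer), "start", "end")
| stats min(eval(if(type="start", _time, null()))) as Starttime
        max(eval(if(type="end",   _time, null()))) as Endtime by ID
| eval Diff=Endtime-Starttime
| where Diff > 600
| fieldformat Starttime=strftime(Starttime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat Endtime=strftime(Endtime, "%Y-%m-%d %H:%M:%S.%3N")
```

Diff is in seconds here, so "greater than 10 minutes" is Diff > 600, and the result can feed a chart directly.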
Hello all,

blacklist   blackout_end   blackout_start
1           1616756907     1616756427
1           1616756907     1616756427

I am trying to add the value for blacklist: if _time > blackout_start AND _time < blackout_end, then blacklist=1, else 0. Please help in getting the right answer. Thanks.
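Assuming blackout_start and blackout_end are epoch seconds like _time (which the sample values suggest), a direct translation of that condition is:

```
| eval blacklist=if(_time > blackout_start AND _time < blackout_end, 1, 0)
```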
Hello,

Our chart looks as follows (see attached screenshot). Is there any way to display not all x-axis values, but e.g. only those at a defined or dynamic time grain — perhaps only the major x-axis values? The above is a bit too much. There is a min/max-only option in the chart settings, but that is too limited for us.

Kind regards,
Kamil
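If this is a Simple XML chart, label density on a time axis can usually be tuned with a charting option; a sketch (the P1D value is illustrative — it requests one major label per day):

```
<chart>
  <option name="charting.axisLabelsX.majorUnit">P1D</option>
</chart>
```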
Hi everyone,

I'm not able to submit a case and cannot see my cloud subscription in my account on the Customer Community (force.com):

Insufficient Privileges
You do not have the level of access necessary to perform the operation you requested. Please contact the owner of the record or your administrator if access is necessary. For more information, see Insufficient Privileges Errors.

Could you please contact me, or review my user account and the associated cloud instance? Thanks.