All Topics


When I download a dashboard as a PDF or PNG, a scrollbar shows up in the export even though it doesn't appear on the dashboard page itself (see the picture below). The dashboard type is Dashboard Studio. How can I fix this?
Hello, I have the following configuration for one index:

maxTotalDataSizeMB = 333400
maxDataSize = auto_high_volume
homePath = volume:hotwarm_cold/authentication/db
coldPath = volume:hotwarm_cold/authentication/colddb
thawedPath = /splunk/data2/authentication/thaweddb
coldToFrozenDir = /splunk/data2/authentication/frozendb
tstatsHomePath = volume:hotwarm_cold/authentication/datamodel_summary
homePath.maxDataSizeMB = 116700
coldPath.maxDataSizeMB = 216700
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 2592000
repFactor = auto

The current log volume for this index is 3 GB/day. Due to a change in requirements, the volume will increase to ~15 GB/day and the retention period will change to 60 days. Could you explain how maxTotalDataSizeMB, homePath.maxDataSizeMB, coldPath.maxDataSizeMB, and maxWarmDBCount should be calculated, and how the calculation changes with data volume and retention period?
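There is no official formula, but the usual back-of-the-envelope math, assuming roughly 50% on-disk compression (the real ratio varies by data, so measure yours first), is: 15 GB/day x 60 days = 900 GB of raw data, which is roughly 450 GB (~460800 MB) on disk. A sketch of how the stanza could change; the hot/warm vs. cold split below is illustrative, not a recommendation:

# sketch only, not a drop-in config: assumes ~50% on-disk compression
# 15 GB/day * 60 days = 900 GB raw  ->  ~450 GB on disk  ->  ~460800 MB
[authentication]
# 60 days * 86400 seconds
frozenTimePeriodInSecs = 5184000
# must cover the full retention window; whichever of size or time is reached first freezes the oldest buckets
maxTotalDataSizeMB = 460800
# size homePath to your fast storage; home + cold together should cover maxTotalDataSizeMB
homePath.maxDataSizeMB = 153600
coldPath.maxDataSizeMB = 307200
# maxWarmDBCount rarely needs 4294967295; the default (300) only controls when warm buckets roll to cold

In short, the size limits scale linearly with daily volume times retention days (times your compression ratio), and frozenTimePeriodInSecs is just the retention period in seconds.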
I have the following event from GCP Pub/Sub:

{
    attributes: { }
    data: {
        insertId: dbp95qcbup
        logName: organizations/xxxxxxx/logs/cloudaudit.googleapis.com%2Fdata_access
        protoPayload: { [+] }
        receiveTimestamp: 2021-08-02T05:52:58.861079027Z
        resource: { [+] }
        severity: NOTICE
        timestamp: 2021-08-02T04:01:48.076823Z
    }
    publish_time: 1627883579.307
}

Is there any way to use a forwarder to send only the contents of data{} to Splunk? I essentially want to strip off the outer parts of the JSON (attributes{} and publish_time) and have the event indexed as the contents of the data{} field:

{
  "insertId": "dbp95qcbup",
  "logName": "organizations/xxxxxxx/logs/cloudaudit.googleapis.com%2Fdata_access",
  "protoPayload": {},
  "receiveTimestamp": "2021-08-02T05:52:58.861079027Z",
  "resource": {},
  "severity": "NOTICE",
  "timestamp": "2021-08-02T04:01:48.076823Z"
}
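One approach people use on a heavy forwarder is an index-time transform that rewrites _raw to just the captured data object. A rough sketch, assuming the raw event is a single JSON object and that publish_time always follows data; the sourcetype name is a placeholder, and regex-parsing nested JSON like this is brittle, so test it against real events:

props.conf
[gcp:pubsub:message]
TRANSFORMS-keep_data_only = keep_data_only

transforms.conf
[keep_data_only]
# capture everything between "data": { ... } and the trailing "publish_time" key
REGEX = (?s)"data"\s*:\s*(\{.*\})\s*,\s*"publish_time"
DEST_KEY = _raw
FORMAT = $1

This only works on a heavy forwarder or the indexers, since universal forwarders don't parse events.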
Hello, I am trying to create an alert based on logs from two different indexes. Basically, I want to alert when a zip file that appears in index 1 does not also make it to index 2. I have the following query that combines both indexes, but it isn't accurate: when I run the searches separately, the zip files in question appear in both indexes, while the combined query suggests they only reached index 1.

Query combining both indexes:

index=index_1 OR index=index_2 sourcetype="index_1_logs" OR sourcetype="index_2_logs" "ftp.com" OR "External command has been executed" "*.zip"
| eval results = if(match(index_1_zipfile_field,index_2_zipfile_field), "file made it through", "file did not make it through")
| table results index_1_zipfile_field index_2_zipfile_field
| search index_1_zipfile_field=*
| dedup index_1_zipfile_field

The results show nothing under index_2_zipfile_field, giving the impression that the zip files never made it to index 2:

results | index_1_zipfile_field | index_2_zipfile_field
file did not make it through | fgfbf-fgfgfg-wewsd-dfsf.zip |
file did not make it through | ghghh-rtrtr-trtrt-weqe.zip |

...but when I search index 2 for the files from the table above, I can see they did make it through, so I am unsure what I am doing wrong:

index=index_2 sourcetype=index_2_logs "ftp.com" "*fgfbf-fgfgfg-wewsd-dfsf.zip*"
| table index_2_zipfield_field
| dedup index_2_zipfield_field

Results:

index_2_zipfield_field
fgfbf-fgfgfg-wewsd-dfsf.zip
ghghh-rtrtr-trtrt-weqe.zip

Hopefully that makes sense.
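One thing to note: match() compares a field against a regex, and the two zipfile fields never exist in the same event anyway, so that comparison can't line up. A common pattern is to normalize the file name into one field and then check which indexes it was seen in. A sketch using the field names from the post, assuming both fields hold the same file-name format:

index=index_1 OR index=index_2 (sourcetype="index_1_logs" OR sourcetype="index_2_logs") ("ftp.com" OR "External command has been executed") "*.zip"
| eval zipfile=coalesce(index_1_zipfile_field, index_2_zipfile_field)
| stats values(index) as seen_in by zipfile
| where mvcount(seen_in)=1 AND seen_in="index_1"

Anything this returns was seen only in index_1, which is the condition to alert on.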
I would like to display a table and have the ability to give the data a category via an input field. Each row would have its own input field for the user to enter a value. For example, the data could look like this, where the Category column is the input field. I want to put the brand of the car in the Category field and save it to a lookup table.

|Car|Category|
|Prius|Toyota|
|Mustang|Ford|
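Per-row text inputs inside a table aren't something Simple XML or Dashboard Studio provide out of the box, so most solutions pair a dashboard input (or a custom table cell editor) with a search that writes the chosen category back to a lookup. A sketch of the lookup-update part only, where car_categories.csv and the $car$/$category$ tokens are hypothetical names for your lookup and form inputs:

| makeresults
| eval Car="$car$", Category="$category$"
| table Car Category
| inputlookup append=t car_categories.csv
| dedup Car
| outputlookup car_categories.csv

Because the new row is appended first, dedup keeps the freshly entered category and the rewritten lookup preserves every other car's existing value.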
I could've sworn that a month ago these two worked fine together:
https://splunkbase.splunk.com/app/3757/
https://splunkbase.splunk.com/app/4882/

For instance, the app's AAD Users section doesn't load, because fields like user_id and department don't exist. I've checked, and user_id doesn't exist: https://docs.microsoft.com/en-us/graph/api/resources/user?view=graph-rest-1.0#properties

It also seems the add-on uses no query parameters, so it pulls the default set of properties (which doesn't include the department field). Line 37 of input_module_MS_AAD_user.py needs to be modified to add the $select parameter. The AAD Users section is just one of many that appear empty or don't work.
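For reference, Microsoft Graph only returns a default property set unless you ask for more with $select, so a request shaped roughly like the one below would include department along with an identifier. Which exact properties the add-on needs, and how it builds its URL on line 37, is an assumption on my part:

https://graph.microsoft.com/v1.0/users?$select=id,userPrincipalName,displayName,department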
Hi Splunkers,

I am having the issue below; could you please help me solve it? Here are my events:

08-02-2021 20:46:39.852 +0000 WARN DateParserVerbose - Accepted time (Mon Aug 2 20:10:36 2021) is suspiciously far away from the previous event's time (Tue Aug 3 00:18:26 2021), but still accepted because it was extracted by the same pattern.

TIME 8/2/21 10:35:55.489 AM
EVENT 08-02-2021 10:35:55.489 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Mon Aug 2 10:35:53 2021).

Here is my props.conf:

[azure:prod]
DATETIME_CONFIG = CURRENT
TRUNCATE = 10000
MAX_TIMESTAMP_LOOKAHEAD = 128
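Two things worth noting. First, DATETIME_CONFIG = CURRENT tells Splunk to ignore the event's own timestamp entirely, so if you still see DateParserVerbose warnings for this sourcetype, the props are probably not reaching the instance that parses the data (they need to be on the indexers or a heavy forwarder, not a universal forwarder). Second, if you actually want the event time, an explicit format is more reliable than the defaults; a sketch, assuming your events start with a timestamp like 08-02-2021 10:35:55.489 -0400 (adjust the format string to your real data):

[azure:prod]
TIME_PREFIX = ^
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000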
Hey everyone, as far as I can tell, in the current version of Dashboard Studio there is no way to export to PNG or PDF without including the input menus at the top (time pickers, multiselects, etc.). Is this correct, or is there something I am missing? Thanks in advance for your help.
Hi Splunkers. Could anyone give me some ideas on what kinds of attacks I can detect based on Linux and Windows logs? I'm already working on brute-force attacks, but my team wants me to cover other possible attacks as well. Please share some knowledge. TIA.
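Beyond brute force, common starting points on Windows logs are privileged-group changes, new account creation, and account-lockout storms; on Linux, sudo misuse and SSH logins from unusual sources. As one illustration (index, sourcetype, and field names here depend on the add-ons you use, so treat them as placeholders), a detection for users being added to privileged Windows groups might look like:

index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4728 OR EventCode=4732 OR EventCode=4756)
| stats count earliest(_time) as first_seen latest(_time) as last_seen values(Group_Name) as groups by user
| convert ctime(first_seen) ctime(last_seen)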
Whenever I try to open Splunk from a masked URL (a URL that points to another URL but shows the first URL in the browser address bar), it says: "To protect your security, xxxxxxxxxxxx.com will not allow Firefox to display the page if another site has embedded it. To see this page, you need to open it in a new window." Is there any way I can allow this?
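That message is Firefox refusing to render Splunk Web inside a frame because Splunk sends an X-Frame-Options: SAMEORIGIN header, and URL-masking services typically work by serving your site inside an iframe. If you control the Splunk instance, web.conf has a setting for this; a sketch, with the caveat that disabling it weakens clickjacking protection and that newer browser or Splunk versions may enforce additional policies:

# web.conf on the search head
[settings]
x_frame_options_sameorigin = false

A cleaner fix is usually a real redirect (HTTP 301/302) rather than masked, frame-based forwarding.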
Is there a way to export each raw source file? Example of my search criteria:

index="con1_batch" source="*/PB00E5*/log/*.log"

Top 10 Values | Count | %
/con7/var/batch/PB00E533/log/PB00E533.BatchEdbc.20210718031834.log | 29,154 | 1.92%
/con7/var/batch/PB00E517/log/PB00E517.BatchEdbc.20210718031918.log | 28,679 | 1.889%
/con7/var/batch/PB00E587/log/PB00E587.BatchEdbc.20210718031918.log | 28,667 | 1.888%
/con7/var/batch/PB00E551/log/PB00E551.BatchEdbc.20210718031936.log | 28,643 | 1.887%
/con7/var/batch/PB00E583/log/PB00E583.BatchEdbc.20210718031849.log | 28,512 | 1.878%
/con7/var/batch/PB00E530/log/PB00E530.BatchEdbc.20210718031841.log | 28,433 | 1.873%
/con7/var/batch/PB00E590/log/PB00E590.BatchEdbc.20210718032104.log | 28,330 | 1.866%
/con7/var/batch/PB00E548/log/PB00E548.BatchEdbc.20210718031953.log | 28,157 | 1.855%
/con7/var/batch/PB00E550/log/PB00E550.BatchEdbc.20210718031907.log | 28,114 | 1.852%
/con7/var/batch/PB00E584/log/PB00E584.BatchEdbc.20210718031838.log | 28,061 | 1.848%
...

There are 100+ source files. Can I download or export all the individual source files?
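There isn't a one-click "export one file per source" in Splunk Web, but you can pull the raw events grouped by source and export the whole result, either with the Export button on the search or from the CLI on the search head. A sketch; the output file name is arbitrary and -maxout 0 is there to lift the default result cap:

index="con1_batch" source="*/PB00E5*/log/*.log"
| sort 0 source _time
| table source _raw

splunk search 'index="con1_batch" source="*/PB00E5*/log/*.log" | sort 0 source _time | table source _raw' -output csv -maxout 0 > batch_logs.csv

Splitting that export back into one file per source would then be a scripting step outside Splunk.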
Hi all, I have been using Splunk for about two days, so I am VERY new. I'm trying to get a utilization number for endpoint analytics. What I would like is a query that can tell me the Kbps (formula below) per user for each 5-minute interval, for whatever time range I pick in the dropdown. I have tried numerous ways to do it and have had no luck. Is there a way I can get a table that shows the count (or maybe dc) of users each 5 minutes and the average Kbps per user?

The query below gives me a timechart by src_ip, but it doesn't look right to me, so I'd like a way to verify it (hence the table):

index="myIndex" source="tcp:xxxx"
| eval Kbps=(((cs_bytes*8)/1000)/300)
| timechart span=5m avg(Kbps) by src_ip

I've tried this to get the table, but it doesn't work; it gives me zero matches:

index="myIndex" source="tcp:xxxx"
| bin _time span=5m
| dedup src_ip as SrcIP
| streamstats sum(cs_bytes) as Bytes by SrcIP
| eval Kbps=(((Bytes*8)/1000)/300)
| table SrcIP Bytes Kbps

Any guidance is much appreciated!
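One likely cause: dedup doesn't take an "as" clause, so the second search errors out rather than matching anything. For a verification table, binning to 5 minutes and aggregating with stats is usually enough; a sketch using the field names from the post:

index="myIndex" source="tcp:xxxx"
| bin _time span=5m
| stats sum(cs_bytes) as Bytes dc(src_ip) as Users by _time
| eval Kbps=((Bytes*8)/1000)/300
| eval Kbps_per_user=round(Kbps/Users, 2)
| table _time Users Bytes Kbps Kbps_per_user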
Looking for a process, if possible, to send cloud server log data to on-premise Splunk indexers. I've searched but found nothing solid.

SplunkCloudLogs -> internet -> on-premise Splunk indexers
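If the goal is forwarders on cloud-hosted servers sending to on-prem indexers over the internet, the usual pattern is a universal or heavy forwarder with outputs.conf pointing at the indexers (or at an intermediate forwarder in a DMZ), with TLS enabled and the receiving port (typically 9997) opened inbound through the firewall. A minimal sketch, hostnames being placeholders and the TLS details depending on your version (check outputs.conf.spec):

# outputs.conf on the cloud-side forwarder
[tcpout]
defaultGroup = onprem_indexers

[tcpout:onprem_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useSSL = true

If "SplunkCloudLogs" instead means data already indexed in Splunk Cloud, that is a different problem than forwarding from cloud servers, so it's worth clarifying which one is meant.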
Here's my query, and I want to calculate the difference between count(_raw) for each month. It would be a running column, so next month it would include August, then September, and so on. Can anyone please provide a solution?
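Since the query itself didn't come through in the post, here is a generic sketch of the usual approach: bucket by month with timechart, then use delta to compute the month-over-month difference (the index name is a placeholder):

index=your_index
| timechart span=1mon count as monthly_count
| delta monthly_count as change_vs_previous_month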
Dear Splunkers,

It would be very helpful if I could get an answer on how to find which HEC token is causing authentication failures (num_of_auth_failures=1) in the _introspection logs. I'm using the query below to find the errors, but how do I pinpoint which token is causing the issue?

index=_introspection component=TERM(HttpEventCollector) "data.series"=TERM(http_event_collector) (data.num_of_auth_failures=1 OR data.num_of_requests_to_disabled_token=1 OR data.num_of_requests_to_incorrect_url=1)

Thanks in advance.
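I can't say for certain for your version, but the HEC introspection data usually also carries per-token series alongside the instance-wide http_event_collector series, with a data.token_name field you can split by. Something along these lines, where the series value is an assumption, so check what data.series actually contains in your environment first:

index=_introspection component=HttpEventCollector data.series="http_event_collector_token" data.num_of_auth_failures>0
| timechart span=5m sum(data.num_of_auth_failures) as auth_failures by data.token_name

Requests that fail because the token is unknown or malformed can't be attributed to a configured token that way, so it is also worth checking index=_internal sourcetype=splunkd log_level=ERROR component=HttpInputDataHandler for the client details.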
How do I write a query to populate total users per day, total unique users per day, and the maximum user logins in a minute for that day?

My query:

index=ABC
| timechart span=1m count as total_user dc(data) as unique_users
| timechart max(unique_users) as max_count_per_min
| appendcols [search index=abc | bin span=1d _time | stats count as total_user dc(data) as unique_users by _time]
| table _time total_users unique_users max_count_per_min

I need a simpler query that fulfills all the requirements.
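A single pass that buckets to the minute first and then rolls up to the day avoids the appendcols; a sketch using the same field names (note that carrying values(data) per minute can get memory-heavy with a very large user population):

index=ABC
| bin _time span=1m
| stats count as logins dc(data) as users_this_min values(data) as users by _time
| bin _time span=1d
| stats sum(logins) as total_users dc(users) as unique_users max(users_this_min) as max_count_per_min by _time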
We are trying to set up a new cluster and move from a Splunk single-site to a multisite deployment. Could someone help with all the ports that need to be allowed between the two sites for this to function properly?
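The exact list depends on your layout, but cross-site traffic generally comes down to a few ports: the management port (8089 by default) between the cluster manager and all peers and between search heads and peers, the peer replication port you configure yourself in server.conf, the receiving port (typically 9997) if forwarders send across sites, and the search head cluster replication and KV store (8191) ports if a search head cluster spans the sites. A sketch of the replication-port stanza, where 9887 is just a commonly used choice rather than a default:

# server.conf on each indexer (cluster peer)
[replication_port://9887]
disabled = false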
Hi All, I recently installed the Splunk Upgrade Readiness App on a Linux box; however, when I try to access it, I get the following error:

Error reading progress for user: XXXX on host:

Here are the logs:

2021-08-02 14:06:45,437 INFO 140057984116544 - Handling a request
2021-08-02 14:06:45,437 INFO 140057984116544 - Executing function, name=get_read_progress
2021-08-02 14:06:45,513 ERROR 140057984116544 - Error reading progress for user:

I tried running /bin/splunk createssl server-cert -d . -n server, as mentioned in one of the answers, but it didn't work. Can you please suggest how we can resolve this issue?
I was able to set indexes dynamically in inputs.conf based on the source path's folder name; however, it doesn't seem to work in Splunk Cloud. I have tried uploading an app with the props and transforms, and I also tried using a heavy forwarder. Hoping someone out there might be able to help.

This is basically what my conf files look like:

props.conf
[source::\\fileshare\\folder\\...]
TRANSFORMS=send_to_index_by_source

transforms.conf
[send_to_index_by_source]
SOURCE_KEY=_MetaData:Source
REGEX=\\\wfileshare\\\wfolder\\(\w+)
DEST_KEY=_MetaData:Index
FORMAT=$1

inputs.conf
[monitor://\\fileshare\folder\...\test15.txt]
disabled=false
recursive=true
sourcetype=test15
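Two things tend to bite here. First, index-time TRANSFORMS only run on the first full Splunk instance in the path (a heavy forwarder or the indexers), never on a universal forwarder, so in Splunk Cloud the app has to be on your heavy forwarder or installed on the cloud indexers. Second, the source metadata key is MetaData:Source (no leading underscore; only the index key has one), and the backslash escaping in the regex looks off. A sketch of what I'd try, with the stanza escaping being the part I'm least sure of, so double-check it against your real source values:

props.conf
[source::\\\\fileshare\\folder\\...]
TRANSFORMS-route_by_source = send_to_index_by_source

transforms.conf
[send_to_index_by_source]
SOURCE_KEY = MetaData:Source
REGEX = \\\\fileshare\\folder\\(\w+)\\
DEST_KEY = _MetaData:Index
FORMAT = $1

The target indexes still have to exist on the indexers, or the events will be dropped or rerouted to the last-chance index depending on your settings.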
Hello, I'm trying to use the TA-QualysCloudPlatform app but am running into a couple of issues.

1) When I open the Splunk GUI and click Data Inputs > Qualys Cloud Platform > Add New, where does that data input get stored? I don't see it in either the default or the local inputs.conf of the app, so it must be stored elsewhere.

2) When I get the app set up and attempt an API call, the internal Qualys log shows "No Credentials Found, Cannot Continue." This is after setting up the app, entering the credentials in the GUI, saving, and restarting Splunk.

Any ideas? Thanks for your help.
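Two checks that usually answer both questions, with the caveat that how this particular TA stores things is an assumption on my part. Inputs created through Settings > Data Inputs land in an inputs.conf under whichever app context the UI wrote them to (not necessarily the TA's own local directory), and btool will show exactly which file each stanza came from. Credentials entered on a modular-input setup page are normally stored encrypted via the storage/passwords endpoint rather than in a plain conf file:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i qualys

| rest /servicesNS/-/-/storage/passwords splunk_server=local
| table title eai:acl.app username realm

If the REST search shows the credential saved under a different app context than the one the TA reads from, that mismatch is a common cause of "No Credentials Found."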