I'm following the Line Chart example in the Dashboard Studio app in Splunk:

index=_internal _sourcetype IN (splunk_web_access, splunkd_access) | timechart count by _sourcetype

"viz_ZrQCy9wp": {
    "type": "viz.line",
    "options": {
        "fieldColours": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    }
},

I cannot get it to set the field-name colours on a timechart. I'm having the same issue on other searches, and the second y-axis settings don't appear to work either. Has something changed with how Splunk handles charts in Dashboard Studio? Thanks
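One thing worth checking: the Dashboard Studio documentation spells this option fieldColors (American spelling), so a fieldColours key would most likely be ignored silently. A minimal sketch of the stanza with only that change, assuming current Dashboard Studio builds behave like the documented examples:

"viz_ZrQCy9wp": {
    "type": "viz.line",
    "options": {
        "fieldColors": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    }
}

If the colours still don't apply, confirm the keys exactly match the series names that timechart actually produces.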
Hi, we have a custom search that should alert when a critical host, defined in the search, is missing. The issue is that we haven't been alerted on some hosts not having logs, because the last time they had any logs was ages ago. When I change earliest=-1d to earliest=-1y the hosts I want appear, but the search takes much longer. Is there a way to create a stats line for every host value specified, so I can fillnull the fields as appropriate? Here is the search:

| tstats prestats=true count where index="*", (host="host01" OR host="host02" OR host="host_01" OR host="host_02") earliest=-1d latest=now by index, host, _time span=1h
| eval period=if(_time>relative_time(now(),"-1d"),"current","last")
| chart count over host by period
| eval missing=if(last>0 AND (isnull(current) OR current=0), 1, 0)
| where missing=1
| sort - current, missing
| rename current as still_logging, last as old_logs, missing as is_missing
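One way to get a row for every critical host without scanning a year of history is to append the host list itself with a zero count and let stats merge it; a minimal sketch reusing the host names above:

| tstats count where index=* (host="host01" OR host="host02" OR host="host_01" OR host="host_02") earliest=-1d latest=now by host
| append
    [| makeresults
     | eval host=split("host01,host02,host_01,host_02", ",")
     | mvexpand host
     | eval count=0
     | fields host count]
| stats sum(count) as count by host
| where count=0

Hosts that logged in the last day sum to a positive count; hosts that only appear via the appended zero rows come out as count=0, regardless of how long ago they last logged.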
I have quiz values for 10 quizzes. Each quiz is a column and the values are 0-100 in each row. I am trying to calculate the average of each column and have that as a point on a line chart, with 0-100 as the y-axis and each quiz as an x-axis column. For example:

| chart avg(quiz_01) AS "Quiz 1 Average", avg(quiz_02) AS "Quiz 2 Average", avg(quiz_03) AS "Quiz 3 Average"

But all of the points end up in the same column in the line chart. Thanks
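chart avg(...) with no by-clause returns a single row with one column per quiz, which is why every point lands in the same x-axis category. Transposing that row turns each quiz into its own category; a minimal sketch:

| stats avg(quiz_01) AS "Quiz 1 Average", avg(quiz_02) AS "Quiz 2 Average", avg(quiz_03) AS "Quiz 3 Average"
| transpose
| rename column AS Quiz, "row 1" AS Average

With Quiz as the x-axis and Average as the y-axis, each quiz becomes its own point on the line.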
I have a field with values like below:

(a)
(a,b)
(c)
(a,c)

I am trying to parse these values and get stats like below:

a 3
b 1
c 2
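A minimal sketch, assuming the field is named my_field: strip the parentheses, split on commas, expand the multivalue, then count:

| eval item=split(trim(my_field, "()"), ",")
| mvexpand item
| stats count by item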
I've recently updated Splunk_TA_windows from version 4.1.8 to version 8.1.2. As I went through the documentation, I noticed a new setting in inputs.conf: set "renderXml=0" to keep WinEventLogs in "classic" or "friendly" mode. After making that update to the TAs deployed to all UFs and to the indexer cluster, I'm now getting the same event under both formats. E.g., if I have an EventCode=4624 for a specific host, I run a search and I can see the same event (in different formats) with sources XmlWinEventLog:Security AND WinEventLog:Security. I only want the WinEventLogs in classic mode; I don't need the XML at the moment. If I set renderXml=true I ONLY get XmlWinEventLogs. Some details:

- I ran btool for inputs on a dev UF and I can see that renderXml=false
- I ran btool for inputs on one indexer and I can see that renderXml=false
- Splunk_TA_windows version 8.1.2

My inputs.conf file:

[WinEventLog://Security]
disabled = 0
renderXml = false

Does anyone have any idea why I'm still seeing both formats?
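Since btool already reports renderXml=false everywhere, one thing to rule out is that the XML copies are simply older events still sitting in the index from before the config push, picked up by the search time range. Comparing _indextime makes that obvious; a sketch:

source="XmlWinEventLog:Security" EventCode=4624
| eval indexed_at=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| stats max(indexed_at) AS last_indexed by host

If last_indexed predates the deployment of the updated TA, the two formats are just historical overlap and the XML copies will age out. If XML events are still arriving, look for a second app still shipping a renderXml=true stanza for the same channel (btool with --debug shows which app each setting comes from).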
All, I want to take the list of hosts currently being indexed by Splunk and compare it to a static list, then show which items in the static list are not being indexed. How would I approach this?
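A minimal sketch, assuming the static list lives in a lookup named expected_hosts.csv with a host column: let a subsearch return every host Splunk has indexed, and keep only the lookup rows that don't match:

| inputlookup expected_hosts.csv
| search NOT [| tstats count where index=* by host | fields host]

Bound the tstats time range (e.g. where index=* earliest=-7d) to define what "currently being indexed" means.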
Hi, I am currently working on a search to filter values based on a lookup table, and I am having a difficult time with the backslash character ("\"). The search is the following:

index=<index> source=source<source> access IN ([| inputlookup lookup_accesses.csv | mvcombine delim="\",\"" Accesses | nomv Accesses | eval Accesses = "\"" + Accesses + "\"" | return $Accesses]) | fields <fields>

The problem occurs when the data in the lookup contains the backslash character ("\"); in that case the search does not work and returns zero results. If the lookup data doesn't contain a backslash, it works fine. The lookup field may contain file names and directories, and we are trying to make it work for both cases. Any help will be appreciated. Regards, Javier.
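One way to sidestep the quoting problem entirely: instead of splicing the lookup values back into the search string (where every backslash has to survive another round of search-language parsing), use lookup itself as the filter. Values are then compared directly, so backslashes need no escaping. A sketch; access_matched is just a scratch field name:

index=<index> source=source<source>
| lookup lookup_accesses.csv Accesses AS access OUTPUT Accesses AS access_matched
| where isnotnull(access_matched)
| fields <fields>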
Hello, my company has a sidebar that is consistent throughout all our other internal applications, and we were trying to get that same sidebar onto our Splunk instance as well. We want to add a custom sidebar with links to other pages (it looks a bit like the Bootstrap 5 sidebar, https://getbootstrap.com/docs/5.0/examples/sidebars/) within all the apps (home/search/etc.). Is there any way this can be done? When I looked elsewhere for tips on styling the UI, most of them were about styling the dashboards themselves. Is there also a way to include a custom JavaScript file within that UI? I am new to Splunk, so maybe I am not looking in the right areas.
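As far as I know there is no supported hook for injecting chrome such as a sidebar across every Splunk page; UI customization is scoped to apps and dashboards. The closest supported mechanism is the script and stylesheet attributes of a Simple XML dashboard, with the files placed under $SPLUNK_HOME/etc/apps/<your_app>/appserver/static/; a sketch:

<dashboard script="sidebar.js" stylesheet="sidebar.css">
  <label>Example dashboard with custom JS/CSS</label>
  <!-- rows and panels as usual -->
</dashboard>

Note this applies per dashboard only; it will not alter the Splunk navigation bar or non-dashboard pages.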
Hello, Roxio Secure Burn stores a history of its burn logs in C:\ProgramData\Roxio Log Files. I have a report set up in Splunk to monitor that location on all computers that have Roxio installed:

source="c:\\ProgramData\\Roxio Log Files\\*"

Most of the systems show up fine. However, several systems have files saved in that location that do not show in the Splunk report. Those systems are visible in other reports, such as failed logons, reboots, etc., but nothing shows up for the report above. The permissions for that location are the same as on systems that DO show up. I have adjusted the time range to include the past 6 months, year, and all time. Nothing shows in the Splunk results, yet I can see logs in the actual directory on the system itself. Any ideas?
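On one of the affected machines, it is worth asking the forwarder itself what it makes of the monitored path; the tailing processor logs skipped files and permission errors to _internal. A sketch (component names vary slightly between versions):

index=_internal sourcetype=splunkd host=<affected_host> (component=TailingProcessor OR component=TailReader OR component=WatchedFile) "Roxio"

If nothing comes back at all, the input may never have been loaded on those hosts; running splunk btool inputs list on the affected forwarder will confirm whether the monitor stanza is present there.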
I have a lookup table listing CVEs which I don't want to appear in our report, so we made the lookup table and are adding it to the search:

| table Severity, "EC2 Instance ID", "EC2 Instance Name", "Rules Package", Rule, CreatedAt, Links, title, description, recommendation, numericSeverity | lookup ignore_cve.csv

But I am getting the error "Error in 'lookup' command: Must specify one or more lookup fields." Do I have to add something else after ignore_cve.csv? Kindly help.
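The lookup command needs at least one field pairing (<lookup_column> AS <event_field>) so it knows what to join on. A sketch of the exclusion pattern, assuming the CSV has a column named cve and the CVE identifier in your events is carried in title; adjust both names to match your data:

| table Severity, "EC2 Instance ID", "EC2 Instance Name", "Rules Package", Rule, CreatedAt, Links, title, description, recommendation, numericSeverity
| lookup ignore_cve.csv cve AS title OUTPUT cve AS ignored_cve
| where isnull(ignored_cve)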
Hi, I am having difficulty extracting key=value pairs from one of the auto-extracted fields. The problem is that this field may contain just a text value, but it can also contain multiple key=value pairs, and whenever there are multiple key=value pairs in the event I am not getting the desired results. Following are some of my _raw events:

2021-08-10T11:35:00.505 ip=10.1.10.10 id=1 event="passed" model="t1" conn="connmsg=\"controller.conn_download::message.clean\", file=\"/home/folder1/filename_8555c5s.ext\", time=\"21:22:02\", day=\"08/24/2021\""
2021-08-10T11:35:00.508 ip=10.1.10.10 id=1 event="running" model="t1" conn="connmsg=\"model.log::option.event.view.log_view_conn, connname=\"model.log::option.event.view.log_view_conn_name\", user=\"xyz\", remote_conn=10.23.55.54, auth_conn=\"Base\""
2021-08-10T11:35:00.515 ip=10.1.10.10 id=1 event="failed" model="t1" conn="Failed to connect to the file for \"file_name\""
2021-08-10T11:35:00.890 ip=10.1.10.10 id=1 event="extracting" model="t1" conn="connmsg=\"model.log::option.event.view.logout.message\", user=, job_id=65, report_name=",  path=\"{\"type\":1,\"appIds\":\"\",\"path\":\"2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00\\/ip_initiate\\/10.1.120.11\\/http_code\\/200\",\"restrict\":null}\""
2021-08-10T11:36:00.090 ip=10.1.10.10 id=1 event="extracting" model="t1" conn="connmsg=\"model.log::option.event.view.audit.message, user=\"qic\\abc_pqr\, reason_msg=\"component.auth::message:unknown_user\", path=/abc/flows/timespan/2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00/ip_initiate/10.101.10.20/data.ext"
2021-08-10T11:36:00.380 ip=10.1.10.10 id=1 event="triggered" model="t1" conn="Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'"
2021-08-10T11:36:00.880 ip=10.1.10.10 id=1 event="triggered" model="t1" conn="connmsg=\"model.log::option.event.report.finished\", user=, job_id=65, report_name=",  path=\"{\"type\":1,\"namespace\":\"flows\",\"appIds\":\"10,11,12\",\"path_bar\":\"[\\\"ip_initiate=10.1.120.11\\\"]\",\"2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00\\/ip_initiate\\/10.1.120.11\\/http_code\\/200\",\"restrict\":null}\""

The field I am having issues with is "conn", and I want the data extracted into conn roughly like this:

conn
\"controller.conn_download::message.clean\"
model.log::option.event.view.log_view_conn
Failed to connect to the file for \"file_name\"
\"model.log::option.event.view.logout.message\"
\"model.log::option.event.view.audit.message\"
"Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'"

But currently it just extracts the next value after conn=, so the current data in my conn field, based on the raw events above, looks like this:

conn
connmsg=\
connmsg=\
Failed to connect to the file for \"file_name\"
connmsg=\
connmsg=\
Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'

The "conn" field might contain even more key=value pairs, so I also wanted to know if there is a dynamic way to capture any new key=value pair that pops up in conn beyond those specified. Also, the other key=value pairs in the conn field are sometimes auto-extracted and sometimes not. I am trying to write a search-time field extraction using props and transforms, but no luck so far in getting what I want. Can someone please help? Thanks in advance.
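As a search-time stopgap while the props/transforms extraction gets sorted out: capture the first connmsg value when one exists and fall back to the whole conn string otherwise. A sketch; conn_msg and conn_clean are made-up names, and SPL's escaping rules may require tuning the regex against your exact raw data:

| rex field=_raw "connmsg=\W*(?<conn_msg>[\w.:]+)"
| eval conn_clean=coalesce(conn_msg, conn)

For a dynamic view of which keys appear inside conn, something like | rex field=conn max_match=0 "(?<kv_key>\w+)=" returns a multivalue list of the key names per event, so new ones stand out.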
Has anyone ever taken table output from a search and had it attached to the ServiceNow ticket being created?
Tried opening the "Okta Identity Cloud Add-on for Splunk" from the UI to check the configuration and settings, but it keeps showing that it's loading and never actually loads. I checked the "ta_okta_identity_cloud_for_splunk_okta_identity_cloud.log" file from the CLI and here is what it returned:

>>>>>tail -f ta_okta_identity_cloud_for_splunk_okta_identity_cloud.log<<<<<

  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/adapters.py", line 447, in send
    raise SSLError(e, request=request)
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:741)
2021-09-09 11:55:38,725 INFO pid=25100 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2021-09-09 11:55:38,733 ERROR pid=25100 tid=MainThread file=splunk_rest_client.py:request:144 | Failed to issue http request=GET to url=https://127.0.0.1:8089/servicesNS/nobody/TA-Okta_Identity_Cloud_for_Splunk/TA_Okta_Identity_Cloud_for_Splunk_account?output_mode=json&--cred--=1&count=0, error=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/splunk_rest_client.py", line 140, in request
    verify=verify, proxies=proxies, cert=cert, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/adapters.py", line 447, in send
    raise SSLError(e, request=request)
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:741)
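The traceback shows the add-on failing a TLS handshake against splunkd's own management port (127.0.0.1:8089), so this usually points to a protocol/cipher mismatch between the Python libraries bundled with the add-on and the sslVersions / cipherSuite restrictions in server.conf, rather than anything on the Okta side. A quick way to see what splunkd will negotiate, run on the search head:

openssl s_client -connect 127.0.0.1:8089 < /dev/null

Compare the negotiated protocol and cipher against what the add-on's bundled requests/OpenSSL can offer; relaxing cipherSuite/sslVersions, or updating the add-on, is the usual fix.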
Hello, we are encountering an issue after a data migration. The migration was needed to increase disk performance. Basically, we moved all the Splunk data from disk1 to disk2 on a single indexer instance belonging to a multi-site Splunk indexer cluster. The procedure was:

1. With Splunk running, rsync the data from disk1 to disk2
2. Once rsync finished, stop Splunk
3. Put the cluster in maintenance mode
4. Perform the rsync again to copy the remaining delta from disk1 to disk2
5. Remove disk1 and point Splunk to disk2
6. Restart Splunk

Once we restarted Splunk, some buckets were marked as DISABLED. This happened because when we stopped Splunk at step 2, the hot buckets rolled to warm (on disk1). During the rsync at step 4, those freshly rolled warm buckets from disk1 were copied to disk2, where hot buckets with the same IDs were present. This caused the conflict, and the buckets were marked as DISABLED.

So basically the DISABLED buckets could have more data (but not all the data) than the non-disabled ones. Furthermore, the non-disabled ones have been replicated within the cluster. Do you think there is a way to recover those DISABLED buckets so that they are searchable again? Looking at:

https://community.splunk.com/t5/Deployment-Architecture/What-is-the-naming-convention-behind-the-db-buckets/m-p/9983
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/HowSplunkstoresindexes#Bucket_naming_conventions

it seems the solution, if I understood correctly, could be (with the Splunk instance not running) to move the data from, for example, DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 to db_1631215114_1631070671_herechangethebucketID_3C08D28D-299A-448E-BD23-C0E9B071E694. If so: how many digits are allowed for the bucket ID? Does anyone have experience doing this? Once done, will the new buckets be replicated within the cluster?
Here is what I find in the internal logs checking for one of the affected buckets. Query:

index=_internal *1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 source!="/opt/splunk/var/log/splunk/splunkd_ui_access.log" source!="/opt/splunk/var/log/splunk/remote_searches.log" | sort -_time

Result:

09-09-2021 14:18:41.758 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 idx=_internal from=hot_v1_448 to=db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 size=10475446272 caller=size_exceeded _maxHotBucketSize=10737418240 (10240MB,10GB), bucketSize=10878386176 (10374MB,10GB)
09-09-2021 14:18:41.767 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694
09-09-2021 14:18:41.795 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694
09-09-2021 14:18:41.817 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694
09-09-2021 15:53:19.476 +0200 INFO DatabaseDirectoryManager - Dealing with the conflict bucket="/products/data/xxxxxxxxx/splunk/db/_internaldb/db/db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694"...
09-09-2021 15:53:19.477 +0200 ERROR DatabaseDirectoryManager - Detecting bucket ID conflicts: idx=_internal, bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/hot_v1_448, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~595~E17D5544-7169-4D32-B7C0-3FD972956D4B, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/595_E17D5544-7169-4D32-B7C0-3FD972956D4B, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1628215818_1627992904_595_E17D5544-7169-4D32-B7C0-3FD972956D4B. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1628215818_1627992904_595_E17D5544-7169-4D32-B7C0-3FD972956D4B. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~591~12531CC6-0C79-473A-859E-9ADF617941A2, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/591_12531CC6-0C79-473A-859E-9ADF617941A2, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1628647804_1628215848_591_12531CC6-0C79-473A-859E-9ADF617941A2. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1628647804_1628215848_591_12531CC6-0C79-473A-859E-9ADF617941A2. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~606~1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1630204023_1629772040_606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1630204023_1629772040_606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~603~1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631172918_1631063432_603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631172918_1631063432_603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~436~2E5A3717-4C0C-487C-87D3-A7127B3DB42D, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631196626_1631073242_436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631196626_1631073242_436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~589~12531CC6-0C79-473A-859E-9ADF617941A2, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/589_12531CC6-0C79-473A-859E-9ADF617941A2, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631199124_1630935298_589_12531CC6-0C79-473A-859E-9ADF617941A2. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631199124_1630935298_589_12531CC6-0C79-473A-859E-9ADF617941A2. Please check this disabled bucket for manual removal.
Detecting bucket ID conflicts: idx=_internal, bid=_internal~594~E17D5544-7169-4D32-B7C0-3FD972956D4B, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/594_E17D5544-7169-4D32-B7C0-3FD972956D4B, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215283_1630935291_594_E17D5544-7169-4D32-B7C0-3FD972956D4B. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631215283_1630935291_594_E17D5544-7169-4D32-B7C0-3FD972956D4B. Please check this disabled bucket for manual removal.

Thanks a lot, Edoardo
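For what it's worth, per the bucket-naming documentation linked above, the local bucket ID is just an integer that must be unique within the index on that peer; there is no fixed digit limit. A heavily hedged sketch of the rename (Splunk stopped; 9448 is an arbitrary ID assumed not to be used by any other bucket of this index on this peer; validate the whole plan with Splunk Support before touching cluster data):

# run on the peer with splunkd stopped
cd /products/data/xxxxxxxxx/splunk/db/_internaldb/db
mv DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 \
   db_1631215114_1631070671_9448_3C08D28D-299A-448E-BD23-C0E9B071E694

On restart the peer should rediscover the bucket as a new one and the cluster manager should replicate it to meet the replication and search factors. Whether you then end up with overlapping copies of the same events (this bucket plus the smaller non-disabled one) is exactly the kind of side effect to confirm with Support first.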
We are dealing with an issue where some of our users have a very short timeout in Splunk. We are working with Splunk to come up with better timeouts, but in the meantime we need a way to stop dashboard panels from loading when a user times out, to prevent them seeing partial results. I have tried the normal tokens that are used to hide a panel until a search is done, but these "stopped" searches report as done (they just have partial results). I have seen old solutions that used "finalized", but that has been deprecated. (I've tried cancelled, fail and error as well; no luck.) Does anyone have any idea how else I can stop these panels from loading when a search is "stopped" because of a timeout?
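One avenue to experiment with, under the assumption that a timed-out ("stopped") job finalizes with doneProgress below 1.0 while a genuinely finished one reports exactly 1. I have not verified that $job.doneProgress$ is exposed to Simple XML event handlers the way $job.resultCount$ is, so treat this strictly as a sketch:

<search>
  <query>index=_internal | stats count</query>
  <done>
    <condition match="'job.doneProgress' == 1">
      <set token="show_panel">true</set>
    </condition>
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </done>
</search>

The panel would then depend on $show_panel$, which only gets set when the job actually ran to completion.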
Hello, I am having trouble writing a props.conf configuration for the following CSV file, which has no header (screenshot of sample data omitted). The five columns shown in the screenshot are all values; the value of the first column is included below for better visibility. Any help will be highly appreciated. Thank you so much.

Value of the first column:

sma2aa_L_0__20210906-194605_16305.html@^@^2020-09-10@^@^04:51:43@^@^sma2aa@^@^insert into "nTABLE_MIGRATION_INFO_current"( "user_name"

[csv]
SHOULD_LINEMERGE=FALSE
TIME_PREFIX=?
TIME_FORMAT=?
TIMESTAMP_FIELDS=?
HEADER_FIELD_LINE_NUMBER=?
INDEXED_EXTRACTIONS=csv
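One complication up front: @^@^ is a multi-character delimiter, and INDEXED_EXTRACTIONS=csv only understands single-character field separators, so index-time CSV extraction is out. A sketch that at least gets line breaking and timestamping right (the sourcetype name is a placeholder), leaving field extraction to search time:

[my_custom_feed]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^[^@]+@\^@\^
TIME_FORMAT = %Y-%m-%d@^@^%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40

At search time, | eval col=split(_raw, "@^@^") recovers the five columns as a multivalue field, and mvindex(col, 0), mvindex(col, 1), etc. assign them to named fields.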
Hello everyone, we are trying to get Azure Information Protection data into Splunk; specifically, we need insight into which users use Azure Information Protection to classify files. I don't really have any experience with Azure and Microsoft Cloud, and from my research I've found some add-ons, but I don't know whether they're useful or not:

- Splunk Add-on for Microsoft Graph API: this seems to import only security alerts related to (among other things) AIP.
- Splunk Add-on for Microsoft Cloud Services: it allows pulling data directly from Azure assets, but I don't know if AIP data is in scope.
- Microsoft Azure Add-on for Splunk: it allows collection of a wide range of information, but the same question applies as in the previous point.

Can you help me clear my mind on this variety of add-ons and see whether one of them (or maybe another one I forgot to mention) is suitable for our needs? Thank you so much.
Can a Splunk Enterprise server work as a forwarder to Splunk Cloud?
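Yes: a Splunk Enterprise instance can forward to Splunk Cloud, in which case it acts as a heavy forwarder. The supported route is installing the forwarder credentials app downloaded from your Splunk Cloud stack, which ships the certificates plus an outputs.conf along these lines (the hostname below is a placeholder for your stack's endpoint):

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.<yourstack>.splunkcloud.com:9997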
Are the forwarders in Splunk Enterprise the same as in ES? I ask because I get "missing forwarders" reported by the Monitoring Console in both, and the numbers are not the same! Please shed some light on this. My understanding is that the forwarders working with Splunk Enterprise are the same ones working for ES? Thank you for your help in advance.
Hello, I'm trying to implement APM Serverless for AWS Lambda in a Python function. The function is deployed via a container image, so the extension is built as a layer in the Dockerfile. First, in case someone is trying to auto-instrument this process: the extension must be unzipped into /opt/extensions/, not /opt/ as suggested in the docs; otherwise the Lambda won't see the extension. However, when executing the function, I get the following error:

AWS_Execution_Env is not supported, running lambda with default arguments.
EXTENSION Name: appdynamics-extension-script State: Started Events: []
Start RequestId
End RequestId
Error: exit code 0 Extension.Crash

with no further information. Any idea what could be causing the crash? Thanks!
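For anyone reproducing the packaging step described above (unzipping the extension into /opt/extensions/, not /opt/), a Dockerfile sketch; the base image tag, extension URL, and handler name are all placeholders:

FROM public.ecr.aws/lambda/python:3.9
ARG APPD_EXTENSION_URL
# unzip the AppDynamics Lambda extension where the runtime looks for extensions
RUN yum install -y unzip \
 && curl -sSL -o /tmp/appd-extension.zip "$APPD_EXTENSION_URL" \
 && unzip /tmp/appd-extension.zip -d /opt/extensions/ \
 && rm -f /tmp/appd-extension.zip
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]

As for the Extension.Crash with exit code 0: checking CloudWatch for the extension's own log stream is usually the next step to get more than the bare crash record.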