All Topics

Hello, I have a table of items and I need to convert the values in the rows "pa_name" and "pa_valor" into columns, keeping the relation between them. For example, I have this: index="items" id IN (3438776 3131202 3438780) | stats latest(pa_nome) AS pa_nome latest(ipa_valor) AS ipa_valor BY id | makemv delim=";" pa_nome | makemv delim=";" ipa_valor I want this: Can someone help me?
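If pa_nome and ipa_valor are parallel semicolon-delimited lists, one common way to pivot them into columns is mvzip + mvexpand + xyseries. This is only a sketch built on the query above; the exact field names (pa_nome vs. pa_name, ipa_valor vs. pa_valor) would need to be checked against the actual data:

```
index="items" id IN (3438776 3131202 3438780)
| stats latest(pa_nome) AS pa_nome latest(ipa_valor) AS ipa_valor BY id
| eval pair=mvzip(split(pa_nome, ";"), split(ipa_valor, ";"), "=")
| mvexpand pair
| eval name=mvindex(split(pair, "="), 0), valor=mvindex(split(pair, "="), 1)
| xyseries id name valor
```

mvzip keeps each name paired with its value, mvexpand puts one pair per row, and xyseries pivots the names into columns per id.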
Hello Community! I have a file which is renewed once a day. Often the output is the same as the previous output, so Splunk doesn't index the file and its content isn't available in a subsearch with a last-24h time filter. There are config attributes like crcSalt to tell Splunk it should read the file if the CRC hash has changed, but in this case I think that's useless, because the crcSalt of the whole file stays the same. What would make sense is to tell Splunk to use only the mtime of the file, but I haven't seen such a setting. Do you have a hint how I can read a file with the same content, same CRC, and same size, but a different timestamp? Thanks
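There is no mtime-only monitoring mode as far as I know, but props.conf does offer CHECK_METHOD = modtime, which makes Splunk re-read the file whenever its modification time changes — note that it re-indexes the entire file each time. A sketch, with the source path as a placeholder:

```
# props.conf on the monitoring instance; scope to the daily file
[source::/path/to/daily_export.csv]
CHECK_METHOD = modtime
```

Because the whole file is re-indexed on every touch, this trades duplicate events for completeness; dedup at search time may be needed.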
I have a VMware environment with 12 VMware hosts. Currently we are using Splunk Cloud, and we use Splunk agents to forward logs from Windows virtual machines. Now I want to monitor the VMware hosts (mainly performance metrics). I started checking the Splunk documentation, but I am a little confused about what I should use. It looks like there used to be an app, the Splunk App for Infrastructure, used to monitor VMware; now there is an IT Essentials app, and there are some VMware add-ons, and I'm not sure where and how they are used. Could you please guide me on which product I should use, and does it have separate licenses that need to be purchased? Because we are using Splunk Cloud, do I need to install something on-premises in order to monitor VMware and forward to Splunk Cloud? Thanks in advance for your help.
What is the approach to perform a restart in the Splunk Cloud free trial? I know that feature is disabled in the GUI; are there any other approaches?
Hi everyone, I have two events: the first event with event_name=LOGIN, the second event with event_name=LOGOUT. I need to get only events with event_name=LOGIN, but only if the event_name=LOGIN time is newer than the event_name=LOGOUT time. Is there a possibility to do so? Thank you very much for helping me!
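One possible sketch, assuming both events carry a common key such as user (the index and field names here are assumptions): compute each side's latest time with an eval inside stats, then keep rows where the last LOGIN is newer than the last LOGOUT:

```
index=your_index event_name IN ("LOGIN", "LOGOUT")
| stats max(eval(if(event_name=="LOGIN", _time, null()))) AS last_login
        max(eval(if(event_name=="LOGOUT", _time, null()))) AS last_logout
        BY user
| where last_login > coalesce(last_logout, 0)
```

The coalesce() keeps users who have logged in but never logged out.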
Hi, I am trying to get a token from the dashboard, "Settings" -> "Tokens". However, after choosing "Tokens", the screen continues to display "Loading...". I've waited over an hour... The Splunk version is 8.0.1. I would appreciate any advice. Thank you!
Hello, I was wondering if it is possible, for aesthetic reasons, to stack two text inputs above each other (instead of next to each other). I tried something like this:   <form> <label>stacked_inputs</label> <fieldset submitButton="false"></fieldset> <row> <panel depends="$alwaysHideCSSStyle$"> <html> <style> #test_input2 { margin-top: 100px !important; padding-left:-200px !important; } </style> </html> </panel> </row> <row> <panel> <input id="test_input1" type="text" token="field1"> <label>field1</label> </input> <input id="test_input2" type="text" token="field2"> <label>field2</label> </input> </panel> </row> </form>   But apparently the second input "test_input2" cannot be shifted above input "test_input1" this way. Does anyone know how to do this?   Cheers Fritz
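One CSS-free sketch: Simple XML stacks rows vertically, so giving each input its own <row> (or its own <panel>) renders them one above the other:

```xml
<form>
  <label>stacked_inputs</label>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="text" token="field1">
        <label>field1</label>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <input type="text" token="field2">
        <label>field2</label>
      </input>
    </panel>
  </row>
</form>
```

This avoids fighting the horizontal layout the fieldset/panel applies to adjacent inputs.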
Hi, I'm using      | sim flow query="<My query>" format=table org_id=<ID> resolution=900000     for my metric query. The query above accepts a parameter named "resolution". Since I'm using the time frame, instead of adding all the values it gives data in 15-minute intervals (default 5 minutes). I would like the resolution value to be based on the time picker of the graph. For example, if I choose 60 minutes in the time picker, the resolution value should change to 3600000. If anyone has any idea how to set an infinite/exact time range in the resolution parameter, please help. Thanks in advance.
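A possible sketch: derive a token from the time input in a <change> handler and pass it to the sim command. This is an assumption-heavy illustration — the eval expression only works for relative earliest values (e.g. -60m); absolute time ranges would need different handling, and the token name is made up:

```xml
<input type="time" token="time_tok" searchWhenChanged="true">
  <label>Time Range</label>
  <default>
    <earliest>-60m</earliest>
    <latest>now</latest>
  </default>
  <change>
    <!-- span of the selected (relative) range, in milliseconds -->
    <eval token="resolution">(now() - relative_time(now(), "$time_tok.earliest$")) * 1000</eval>
  </change>
</input>
```

The query would then read: | sim flow query="<My query>" format=table org_id=<ID> resolution=$resolution$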
I am accessing logs with Generic S3, and the Cisco bucket is self-managed. I am getting the error below.
  I am accessing logs with generic s3 and cisco bucket is self manages.i am getting below error.     2021-05-17 06:30:47,384 level=ERROR pid=57756 tid=Thread-7 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:index_data:91 | datainput="application_cisco_test" bucket_name="wellsky-cisco-umbrella-logs-d133" | message="Failed to collect data through generic S3." start_time=1621233047 job_uid="a26e5872-3832-4df1-b3cf-bb9f6efdf596" Traceback (most recent call last): File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 86, in index_data self._do_index_data() File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 107, in _do_index_data self.collect_data() File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 153, in collect_data self._discover_keys(index_store) File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 224, in _discover_keys bucket = self._get_bucket(credentials) File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 370, in _get_bucket bucket = conn.get_bucket(self._config[asc.bucket_name]) File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/connection.py", line 509, in get_bucket return self.head_bucket(bucket_name, headers=headers) File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/connection.py", line 542, in head_bucket raise err boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden Collapse ErrorCode = boto.exception.S3ResponseError ErrorDetail = S3ResponseError: 403 Forbidden host = sdsd logger = splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader message = Failed to collect data through generic S3. 
source = /opt/splunk/var/log/splunk/splunk_ta_aws_generic_aws_s3_application_cisco_test.log sourcetype = aws:s3:log tag = error
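A 403 Forbidden on head_bucket usually points at missing bucket-level permissions for the input's credentials rather than a Splunk problem. A minimal hedged IAM policy sketch (bucket name taken from the error above; your account may require additional actions such as s3:ListAllMyBuckets):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::wellsky-cisco-umbrella-logs-d133"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::wellsky-cisco-umbrella-logs-d133/*"
    }
  ]
}
```

Also worth checking: the bucket's region matches the input configuration, and any bucket policy on the self-managed bucket allows the account the add-on authenticates as.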
Hi Team, I am planning to chart the WebPageTest speed index trend in Splunk, showing the performance of the speed index for the current week, last week, and the week before last. I developed the query below to do the job, but I am not getting the trend and the query gives an error. Can someone please help me draw the trend?   index=nextgen sourcetype=lighthouse_json sourcetype=lighthouse_json datasource=webpagetest step="Homepage" | eval myTime=case(test >= relative_time(now(), "-7d"), "CurrentWeek", test >= relative_time(now(), "-14d") AND test < relative_time(now(), "-7d", "PriorWeek", test >= relative_time(now(), "-21d") AND test < relative_time(now(), "-14d", "ThirdWeek", 1=1, "Other") | stats values(speedindex) as score by myTime | eval fast = if(score>0 AND score<2000,score,0) | eval moderate = if(score>=2000 AND score<2999,score,0) | eval slow = if(score>=3000,score,0) | fields - score
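The case() in the query above has misplaced parentheses: the "PriorWeek" and "ThirdWeek" labels ended up inside the relative_time() calls, which is most likely the error. A corrected sketch, assuming `test` holds an epoch timestamp and that averaging speedindex per week bucket is the intent (values() returns a multivalue, which the later numeric comparisons cannot use):

```
index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage"
| eval myTime=case(
    test >= relative_time(now(), "-7d"), "CurrentWeek",
    test >= relative_time(now(), "-14d") AND test < relative_time(now(), "-7d"), "PriorWeek",
    test >= relative_time(now(), "-21d") AND test < relative_time(now(), "-14d"), "ThirdWeek",
    1=1, "Other")
| stats avg(speedindex) AS score BY myTime
| eval rating=case(score > 0 AND score < 2000, "fast",
                   score >= 2000 AND score < 3000, "moderate",
                   score >= 3000, "slow")
```

Each case() branch is now a complete condition-value pair, so the week labels land in myTime rather than inside relative_time().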
I am trying to solve a problem where Splunk DB Connect [DBX] can leverage Azure AD authentication through the JDBC driver for Azure MS SQL Server access. Currently, upon trying to achieve this, the DBX server ends up in a Java exception -

2021-05-15 12:44:31.032 +0800 [dw-56 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: be206a071cdf0267
java.lang.NoClassDefFoundError: com/microsoft/aad/msal4j/IClientCredential
...
Caused by: java.lang.ClassNotFoundException: com.microsoft.aad.msal4j.IClientCredential
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 92 common frames omitted

Microsoft has already enhanced its JDBC driver to work with Azure AD auth instead of native MS SQL user-based auth - https://docs.microsoft.com/ja-jp/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15

Points #1 & #2 are the use-cases, one of which I would want to make use of; #3 is what is currently documented and works just fine.

1 - ActiveDirectoryPassword (authentication=ActiveDirectoryPassword): supported in driver version v6.0 and later; you can use your Azure AD username and password to connect to Azure SQL Database and Synapse Analytics. Illustration of the JDBC string usage -

jdbc:sqlserver://<instance_url>:<instance_port>;database=<db>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=<instance_domain>;loginTimeout=30;authentication=ActiveDirectoryPassword

2 - ActiveDirectoryServicePrincipal (authentication=ActiveDirectoryServicePrincipal): supported in driver version v9.2 and later; you can use the client ID and secret of the service principal to connect to Azure SQL Database and Synapse Analytics.
Illustration of the JDBC string usage -

jdbc:sqlserver://<instance_url>:<instance_port>;database=<db>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=<instance_domain>;loginTimeout=30;authentication=ActiveDirectoryServicePrincipal;aadSecurePrincipalId=<secure_principal_id>;aadSecurePrincipalSecret=<secure_principal_secret>

3 - SqlPassword (authentication=SqlPassword) - works just fine: use to connect to SQL Server using the userName or user and password properties.

- I have approached Splunk Support - they say it's beyond break-fix
- Then Professional Services - they say it's more of an OnDemand matter
- Then OnDemand - says it needs to be an enhancement, so it should be an Idea
- Now, the Idea is where I am at this moment

So I am reaching out to the community: if someone has solved this problem, please share the solution / workaround; otherwise please upvote the idea so that it gets due attention. Idea - https://ideas.splunk.com/ideas/EID-I-987

Splunk Premium customers and partners would like to see Splunk DB Connect [DBX] Server enhanced to leverage Azure AD authentication through the JDBC driver for Azure MS SQL Server access. Currently, upon trying to achieve this, the DBX server ends up in a Java exception, detailed in the Support case attached to this idea request. Microsoft has already enhanced its JDBC driver to work with Azure AD auth instead of native MS SQL user-based auth. The idea is simple - DBX [Req Initiation] ->> JDBC [Request Handover] ->> Azure AD [Request Auth] ->> Azure MS SQL [Request Fulfillment]
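Since the stack trace is a ClassNotFoundException for com.microsoft.aad.msal4j.IClientCredential, one workaround people try (untested here, unsupported by Splunk, and the file names below are illustrative) is placing msal4j and its transitive dependencies next to the Microsoft JDBC driver in the DBX drivers directory:

```
$SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/
    mssql-jdbc-9.2.1.jre8.jar   # Microsoft JDBC driver v9.2+ (service principal support)
    msal4j-1.10.0.jar           # provides com.microsoft.aad.msal4j.IClientCredential
    oauth2-oidc-sdk-9.4.jar     # msal4j transitive dependency (plus the rest of its tree)
```

Then restart the DBX task server. If the DBX classloader still does not pick the jars up, the idea link above remains the supported route.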
Tokens in notable event titles and descriptions are not getting expanded to the values of the tokens on the Incident Review dashboard, even though the search-time extractions do exist in the notable event. Example: my notable event title contains the following text: Suspicious activity detected from User $UserId$. Expected output for the Title in the Incident Review dashboard: Suspicious activity detected from User XYZ. How the Title column actually gets displayed in the Incident Review dashboard: Suspicious activity detected from User $UserId$. However, the expansion does work for built-in extractions or fields like sourcetype etc. Tokens work with some extractions and not with others, even though both of the search-time extractions do exist in the notable events.
What query do we need to use to get the contributing event details for the "Protocol or Port Mismatch" alert, and what actions do we need to take if we receive this kind of alert?
Hi, I'm looking for a solution which shows only the last and last-1 results using the stats or streamstats function. The aim is to display only something like max(row) and max(row)-1.

my search... | stats values(product_tag*) as product_tag* values(*) as * by product,color,product_tag

Outcome:
product | color | product_tag | description
phone | red | abc_1 | blabla1
phone | red | abc_2 | blabla2
phone | red | abc_3 | blabla3
phone | red | abc_4 | blabla4

Desired outcome:
product | color | product_tag | description
phone | red | abc_3 | blabla3
phone | red | abc_4 | blabla4

or

product | color | product_tag | description
phone | red | abc_4 | blabla4
phone | red | abc_3 | blabla3
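A hedged sketch (it assumes the stats output is already sorted the way the table shows): number the rows with streamstats, then keep the last two. A plain `| tail 2` would also work under the same sort assumption:

```
... | stats values(product_tag*) as product_tag* values(*) as * by product, color, product_tag
| streamstats count AS row
| eventstats max(row) AS max_row
| where row >= max_row - 1
| fields - row, max_row
```

streamstats numbers each result, eventstats attaches the highest row number to every result, and the where clause keeps only the last two.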
Description
Recorded value for [Turn On Test 123]
Recorded value for [Turn On Test 456]
Execute all Appliances
In process to Execute

I would like to create another field named "Status" that extracts only "Turn On" for "Recorded value for [Turn On Test xxx]" and "Execute" for "Execute all Appliances" & "In process to Execute".
Similar to fetching config by namespace via REST - Configuration Endpoints, is there a way to access the .spec defined for different config files via REST API? Edit: spelling.
We upgraded our Splunk Enterprise version from v7.3.2 to v8.1.4. Since v8, Splunk Enterprise supports Python 3, and I believe the layout of python.exe under \splunk\bin has changed as follows:

●v7.3.2 \splunk\bin\python.exe

●v8.1.4 \splunk\bin\python.exe \splunk\bin\python2.exe \splunk\bin\python3.exe

For v8 and later, I would like to confirm whether, when Python is executed, one of "python.exe" / "python2.exe" / "python3.exe" is run, or whether all of them are used. The reason I ask: up to Splunk Enterprise v7.3.2, our firewall-side Python control targeted "\splunk\bin\python.exe". For v8 and later, should we add "python2.exe" and "python3.exe" to the control, or is controlling a specific "python.exe" sufficient? As for our usage of Splunk Enterprise, we collect Box logs with the "Splunk Add-on for Box" and use it mainly for log analysis. Thank you in advance.
Hello, can anyone help me with which Java version I need to install for Splunk Enterprise 7.3.8 with DB Connect? Splunk DB Connect Version: 3.5.0 Build: 3. And what should JAVA_PATH look like?
DROPDOWN - I want to create a dashboard. While creating the Country dropdown, I want only those countries my team supports (US, UK, AUSTRALIA, etc.) in the dropdown, but we are getting data for all countries. Query - index=* country="US"  OR country="UK" OR country="AUSTRALIA" | table country | dedup country  The above query gives me the perfect result in the dropdown. But I want to add one more option, "All", to the dropdown, which gives me only the above-mentioned country data. Is there any way?
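One common sketch: keep the supported-country filter in the panel search itself, so an "All" choice can simply be a wildcard. Token and index names below are illustrative:

```xml
<input type="dropdown" token="country_tok" searchWhenChanged="true">
  <label>Country</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>country</fieldForLabel>
  <fieldForValue>country</fieldForValue>
  <search>
    <query>index=* country IN ("US","UK","AUSTRALIA") | dedup country | table country</query>
  </search>
</input>
```

The panel search would then be e.g. index=* country IN ("US","UK","AUSTRALIA") country=$country_tok$ — with the supported-country list repeated there, "All" (*) still only covers US, UK, and AUSTRALIA.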
Hi Everyone, How can I extract the highlighted field from raw logs: ARC EVENT RECEIVED FROM SOURCE ,RoutingPath:blaze-team_e-dmrupload ARC EVENT RECEIVED FROM SOURCE ,RoutingPath:blaze-team_g ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF ,RoutingPath:blaze-team_e-dmrupload, Body:null ARC SUCCESSFULLY UPDATED RESPONSE BACK TO SOURCE OR SF ,RoutingPath:blaze-team_g ,Body:{ Thanks in advance
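A hedged rex sketch; the pattern assumes the RoutingPath value runs up to the next comma, whitespace, or end of line, which holds for the four sample lines above:

```
... | rex "RoutingPath:(?<RoutingPath>[^,\s]+)"
```

This would yield blaze-team_e-dmrupload and blaze-team_g as values of a new RoutingPath field.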