All Topics

Hello. This is my first post, so please excuse any mistakes. I believe $SPLUNK_HOME/etc/passwd contains the following fields, and I would like to know what goes into <?1> and <?2>:

: <login user name> : <password> : <?1> : <name shown in splunk Web> : <role> : <email address> : <?2> : <the value of [force_change_pass] goes here when "Require password change on first login" is enabled> : <a number (unclear what it represents; uid?)>

Also, please let me know if any part of my understanding is wrong.
Hello! I was able to connect fine to the controller using the provided `agent-info.xml` (after downloading the custom-config AppServerAgent-1.8-24.4.1.35880). Now, what I'm trying to do is reproduce the same configuration, but based solely on environment variables, because I'll be deploying my app with a "blank" AppServerAgent-1.8-24.4.1.35880 with no custom config. Unfortunately, I can't make it work.

When it works fine using the provided `agent-info.xml`, I can see in the logs:

[AD Agent init] 30 May 2024 17:32:07,965 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration attempted: Application Name [anthony-app] Component Name [anthony-tier] Node Name [anthony-node]
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration SUCCEEDED!

When it fails, using environment variables, I get:

[AD Agent init] 30 May 2024 17:29:22,398 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:29:22,600 ERROR ConfigurationChannel - HTTP Request failed: HTTP/1.1 401 Unauthorized
[AD Agent init] 30 May 2024 17:29:22,600 INFO ConfigurationChannel - Resetting AuthState for Proxy: [state:UNCHALLENGED;] and Target [state:FAILURE;]
[AD Agent init] 30 May 2024 17:29:22,601 WARN ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [xxx.saas.appdynamics.com], port[443], exception [null]
[AD Agent init] 30 May 2024 17:29:22,601 ERROR ConfigurationChannel - Exception: NULL

Here are all the environment variables I've set:

APPDYNAMICS_CONTROLLER_HOST_NAME=xxx.saas.appdynamics.com
APPDYNAMICS_AGENT_TIER_NAME=anthony-tier
APPDYNAMICS_AGENT_APPLICATION_NAME=anthony-app
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=password-copy-pasted-from-config
APPDYNAMICS_AGENT_NODE_NAME=anthony-node
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true

There is one thing that troubles me in the logs, though: using environment variables, I always get messages like these:

[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - XML Agent Account Info Resolver did not find account name. Using default account name [customer1]
[AD Agent init] 30 May 2024 17:29:18,339 WARN XMLConfigManager - XML Agent Account Info Resolver did not find account access key.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - Configuration Channel is using ControllerInfo:: host:[xxx.saas.appdynamics.com] port:[443] sslEnabled:[true] keystoreFile:[DEFAULT:cacerts.jks] use-encrypted-credentials:[false] secureCredentialStoreFileName:[] secureCredentialStorePassword:[] secureCredentialStoreFormat:[] use-ssl-client-auth:[false] asymmetricKeysStoreFilename:[] asymmetricKeysStorePassword:[] asymmetricKeyPassword:[] asymmetricKeyAlias:[] validation:[UNSPECIFIED]

Although I believe all the environment variables were properly loaded:

[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_HOST_NAME] for controller host name [xxx.saas.appdynamics.com]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_PORT] for controller port [443]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_SSL_ENABLED] for controller ssl enabled [true]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Default Agent Account Info Resolver found env variable [APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY] for account access key [****]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [anthony-app]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [anthony-tier]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [anthony-node]

Anyway... Is configuration provided solely by environment variables supposed to work? Thank you!
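One difference that stands out to me between the two runs is the "Using default account name [customer1]" line: for a SaaS controller, my understanding is that the account name also has to be provided, not just the access key. So I plan to retry with the following set (the account name value below is a placeholder; it should be whatever agent-info.xml carries):

APPDYNAMICS_CONTROLLER_HOST_NAME=xxx.saas.appdynamics.com
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true
APPDYNAMICS_AGENT_ACCOUNT_NAME=my-saas-account-name
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=password-copy-pasted-from-config
APPDYNAMICS_AGENT_APPLICATION_NAME=anthony-app
APPDYNAMICS_AGENT_TIER_NAME=anthony-tier
APPDYNAMICS_AGENT_NODE_NAME=anthony-node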
I would like to visualize a timechart using the Single Value visualization with a Trellis layout, and sort the panels by the latest value of each series in the BY clause. I can follow the timechart with a table and order the rows manually, but I would like something more automatic. Is there a way to specify a field projection order, via some kind of sort, that can be used with timechart? I can't seem to find anything, and I may need to rely on something outside the box. Please advise, Tim. Here is my SPL; the resulting visualization is below.

| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
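One workaround I have been sketching, but have not verified: Trellis appears to order panels alphabetically by series name, so renaming each series with a zero-padded prefix derived from its latest value should make the alphabetical order match the value order. On top of my search above (the large constant just inverts the sort so the biggest value comes first):

| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
| untable _time group count
| eventstats latest(count) as latest_value by group
| eval group=printf("%010d %s", 999999999 - latest_value, group)
| fields - latest_value
| xyseries _time group count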
I am in Vulnerability Management and a novice Splunk user. I want to create a query to quickly determine whether we possess any assets that could be affected when a critical CVE is released. For example, if Cisco releases a CVE that affects Cisco Adaptive Security Appliance (ASA), I want to be able to run a query and quickly determine whether we possess any of the affected assets in our environment.
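For illustration, the kind of search I have in mind would run against an asset inventory lookup. This is only a sketch: the lookup name asset_inventory.csv and its fields (asset_id, ip_address, product, version) are hypothetical placeholders for whatever asset source we actually maintain.

| inputlookup asset_inventory.csv
| search product="*Adaptive Security Appliance*" OR product="*ASA*"
| table asset_id ip_address product version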
Good afternoon, I am running a super basic test to validate that I can send a POST to our Ansible Tower listener. Search:

| makeresults
| eval header="{\"content-type\":\"application/json\"}"
| eval data="{\"Message\": \"This is a test\"}"
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true

The same payload works 100% in Postman. What I have noticed is that it converts the double quotes to single quotes:

curl_data_payload: {'monitor_state': 'unmonitored'}

When I test this single-quoted payload in Postman, it also fails with the same "invalid JSON payload" message. Has anyone had this issue and knows how to address it? I have no hair left to rip out.
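One thing I am going to try (a sketch, not verified): the single-quoted payload looks like a Python dict's repr rather than serialized JSON, so building the fields with the json_object eval function (available in Splunk 8.1+) might sidestep the manual quote escaping. Whether the curl command passes the result through untouched is an assumption.

| makeresults
| eval header=json_object("content-type", "application/json")
| eval data=json_object("Message", "This is a test")
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true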
At the time of registration I had accessed classes, and everything was working fine. But after re-logging in to the page I am getting this error. Could someone please help with this?
I'm using Splunk Enterprise 9.2.0. We are able to update our app, which runs on a VM in vSphere, using a .tar.gz file from GitLab: [our_app_name].tar.gz. It works just fine on that VM, but when I try to update the one we have running in a cloud environment in AWS, I get the following error after I upload the .tar.gz file:

There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and [our_app_name]

I would appreciate any advice on what the fix might be, or on how I should start troubleshooting. Thank you.
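For reference, my understanding is that a Splunk app archive must contain exactly one immediate top-level directory, and the blank name before "and" in the error suggests a stray top-level entry in the archive. A quick sketch for listing the distinct top-level entries (standard tar/sed/sort, run wherever the .tar.gz is built):

tar -tzf our_app_name.tar.gz | sed 's|/.*|/|' | sort -u

If anything other than a single [our_app_name]/ line comes back, that extra entry is presumably what the AWS upload is rejecting.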
Hi guys, I have several topics on the table.

1) I would like to know if you have any advice, process, or even a document defining the principles for creating a naming convention for sourcetypes. We have faced the classic issue where some of our sourcetypes had the same name, and we would like to find ways to avoid that from now on.

2) Is it pertinent to add predefined/learned sourcetypes with a naming convention based on the format? (Then we could solve point 1 with a naming convention like app_format, for example.) How do you technically add new predefined sourcetypes, and how do you handle both the management of sourcetypes (point 1) and the management of predefined sourcetypes?

3) How do you share knowledge objects, props, and transforms between two Search Head clusters, and how do you implement a continuously synced mechanism that keeps these objects in sync between both clusters? Do we have to use Ansible deployments to perform the changes on both clusters, or is there a Splunk way to achieve this synchronization more easily inside Splunk (via a script using the REST API, the command line, configuration, etc.)? See the sketch just after this list for the kind of inventory I mean.
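For point 3, one building block I am aware of (only a sketch of the auditing step, not a full sync mechanism) is enumerating knowledge objects on each cluster through the saved-searches REST endpoint, so the two inventories can be diffed before any Ansible or scripted push:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner updated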
I want to merge the cells in the S.No column and share the output with the requestor. The only ask is that Splunk should take all the values, separated into different colours, and send three different emails. For example, given:

S.No.: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

I should send emails for:

S.No: 1, 4, 9
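A sketch of the per-group sending I have in mind, assuming the colour grouping can be expressed as a field (the group boundaries below just mirror the 1/4/9 example, and the email address is a placeholder), using the sendemail command:

| makeresults count=12
| streamstats count as S_No
| eval group=case(S_No<=3, "group1", S_No<=8, "group2", true(), "group3")
| where group="group1"
| sendemail to="requestor@example.com" subject="S.No group 1" sendresults=true inline=true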
Hi Splunk Community,

I need help to write a Splunk query to join two different indexes using any Splunk command that will satisfy the logic noted below. Two separate ID fields in Index 2 must match two separate ID fields in Index 1, using any permutation of Index 2's three ID fields. Here is an outline of the logic: combine an Index 1 record with an Index 2 record into a single record when any matching condition below is satisfied:

(ID_1_A=ID_2_A AND ID_1_B=ID_2_B) OR
(ID_1_A=ID_2_A AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_B)

Sample Data:

Index 1:
ID_1_A | ID_1_B
123    | 345
345    | 123

Index 2:
ID_2_A | ID_2_B | ID_2_C
123    | 345    | 999
123    | 999    | 345
345    | 123    | 999
999    | 123    | 345
345    | 999    | 123

Any help would be greatly appreciated. Thanks.
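A join-free sketch of the matching logic (the index names index1 and index2 are hypothetical): expand each Index 2 event into its six ordered ID pairs, build the single ordered pair for each Index 1 event, and let stats merge records that share a pair key. The final where keeps only pairs found in both indexes:

(index=index1) OR (index=index2)
| eval pair=if(index=="index2",
    mvappend(ID_2_A."|".ID_2_B, ID_2_A."|".ID_2_C,
             ID_2_B."|".ID_2_A, ID_2_B."|".ID_2_C,
             ID_2_C."|".ID_2_A, ID_2_C."|".ID_2_B),
    ID_1_A."|".ID_1_B)
| mvexpand pair
| stats values(*) as * by pair
| where isnotnull(ID_1_A) AND isnotnull(ID_2_A)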
Hi, we are collecting the logs directly through UF and HEC in the indexer cluster. All inputs are defined on the Cluster Manager, and the bundle is then applied to the indexers. Currently we are sending the data to the Splunk indexers as well as to another output group. The following is the outputs.conf config under peer-apps:

[tcpout]
indexAndForward = true

[tcpout:send2othergroup]
server = .....
sslPassword = .....
sendCookedData = true

This config currently sends the same data to both outputs: indexing locally and then forwarding to the other group. Is there a way to keep some indexes indexed locally while others are sent only to the other group? I tried using props and transforms with _TCP_ROUTING, but it is not working at all. Thanks in advance!
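A sketch of the selective behaviour I have been reading about (the setting names come from my reading of the outputs.conf and inputs.conf specs, so please verify them against the docs for your version): enable selective indexing globally, then mark only the inputs whose data should also be indexed locally. The HEC stanza name below is a hypothetical example.

# outputs.conf (peer-apps)
[indexAndForward]
index = true
selectiveIndexing = true

# inputs.conf, only on the inputs that should be indexed locally as well
[http://my_hec_input]
_INDEX_AND_FORWARD_ROUTING = local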
Hello, I'm using the transaction command to compute average durations and identify uncompleted transactions. Assuming only the events within the selected timerange are taken into account by default, transactions that start within the selected timerange of the search but end after it are counted as uncompleted. What can I do to extend the search beyond the range for those uncompleted transactions?

StartSearchTime > StartTransaction > EndTransaction > EndSearchTime = OK
StartSearchTime > StartTransaction > EndSearchTime = true KO (the EndTransaction never happened)
StartSearchTime > StartTransaction > EndSearchTime > EndTransaction = false KO (an EndTransaction exists but can only be found after the selected timerange)

Extending the EndSearchTime is not the solution: the service runs 24/7, so new transactions started within the extended slot would then end up with potential EndTransactions beyond the new range. Thanks for your help. Flo
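A sketch of the pattern I am considering (the index, session field, and START/END markers are hypothetical): search an extended window, keep evicted transactions so incomplete ones stay visible, then only evaluate transactions whose start time falls inside the original window. That way the extension is used solely to find late EndTransactions, not to admit new transactions:

index=myapp earliest=-2h latest=now
| transaction session_id startswith="START" endswith="END" keepevicted=true
| where _time >= relative_time(now(), "-1h")
| eval status=if(closed_txn=1, "complete", "incomplete")

Here _time is the transaction's start (its earliest event), and closed_txn is the completeness flag emitted when keepevicted=true.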
Currently, for asset correlation with IPs we have Infoblox, but that only works when we are on company premises and the IP assigned to the asset is part of the company network. When someone works from home and the IP of the asset changes because of their personal internet connection, that IP does not get added to the asset lookup, as it is not part of the Infoblox flow. I was thinking of maybe using Zscaler to add IP details for the asset, but if there is any approach someone has used successfully to mitigate this, that would be helpful.
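For what it's worth, a sketch of the Zscaler-based enrichment I had in mind (the index, sourcetype, field names, and lookup name are hypothetical placeholders for whatever the Zscaler add-on actually produces in your environment), run as a scheduled search to keep the lookup current:

index=zscaler sourcetype="zscalernss-web"
| stats latest(_time) as last_seen by user src_ip
| outputlookup remote_asset_ips.csv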
Is there a way to run a search for all correlation searches and see their response actions? I want to see which correlation searches create notable events and which ones do not; for example, which ones only increase the risk score. I had hoped to use /services/alerts/correlationsearches; however, it appears that endpoint no longer exists?
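Since correlation searches in recent ES versions are stored as saved searches, a sketch along these lines might work (the action.* field names can vary by ES version, so treat them as assumptions to verify):

| rest splunk_server=local /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1
| table title actions action.notable action.risk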
Hi all, I am trying to reduce the width of the 2nd and 3rd columns of a table, since some of the cells contain long sentences and occupy too much space. I tried referring to an example like the one below:

<row>
  <panel>
    <html depends="$alwaysHideCSSPanel$">
      <style>
        #tableColumWidth table thead tr th:nth-child(2),
        #tableColumWidth table thead tr th:nth-child(3) {
          width: 10% !important;
          overflow-wrap: anywhere !important;
        }
      </style>
    </html>
    <table id="tableColumWidth">

But I am not able to change the width using this. Are any corrections needed in the above HTML?
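A variant I would try next (a sketch; whether it takes effect depends on how the Simple XML table is rendered): explicit column widths generally only stick when the table uses a fixed layout, and the body cells need targeting as well as the headers:

<style>
  #tableColumWidth table {
    table-layout: fixed !important;
  }
  #tableColumWidth table thead tr th:nth-child(2),
  #tableColumWidth table thead tr th:nth-child(3),
  #tableColumWidth table tbody tr td:nth-child(2),
  #tableColumWidth table tbody tr td:nth-child(3) {
    width: 10% !important;
    overflow-wrap: anywhere !important;
  }
</style>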
Hi, I want to know if it is possible to show the number of impacted records in the last 15 minutes for the search below.

Query: index=events_prod_tio_omnibus_esa ( "SESE023" OR "SESE020" OR "SESE030" )

Result: [screenshot]

Requirement: for the above search, if the search is executed at:

11:30 ==> it should show 0 records
11:40 ==> it should show 2 records (the last event, raised at 11:37:14, has 2 records, and current time - event time < 15 mins)
11:50 ==> it should show 2 records (the last event, raised at 11:37:14, has 2 records, and current time - event time < 15 mins)
11:55 ==> it should show 0 records (the last event, raised at 11:37:14, has 2 records, but current time - event time > 15 mins)
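If I am reading the requirement correctly, restricting the search window to the last 15 minutes and counting should behave exactly like the timeline above; a minimal sketch:

index=events_prod_tio_omnibus_esa ("SESE023" OR "SESE020" OR "SESE030") earliest=-15m latest=now
| stats count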
Hi Community, I'm working on a scripted input. I have created a script to convert binary logs into a human-readable format, and it is working fine. The issue is that the file I'm monitoring is in the /var/log/test directory, while the script is at /opt/splunk/etc/apps/testedScript/bin/testedscript.sh, so I'm getting the script path as the source in Splunk (attaching a screenshot as reference). Below is the inputs.conf stanza I'm using (/opt/splunk/etc/apps/testScript/local/inputs.conf):

[script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
disabled=false
index=testing
interval=30
sourcetype=free2

Is there any way I can get the exact source address? In my case it is /var/log/test/file1.
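One thing I intend to try (my understanding is that inputs.conf accepts a source override on input stanzas, but treat that as an assumption to verify): setting source explicitly on the scripted input, e.g.

[script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
disabled = false
index = testing
interval = 30
sourcetype = free2
source = /var/log/test/file1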
I just received an email stating that past June 14 we won't even be able to view past support tickets. I see this as a blocker for learning, because whenever I face an issue I refer to past tickets and learn from them before actually creating a ticket. Past tickets could at least be made available as HTML to view. Kindly let me know if there are any such plans.
Hi, I have ingested Qualys data using the Qualys TA add-on and enabled the inputs to run once every 24 hours. I'm ingesting the host detection and knowledge base logs into Splunk. The requirement is to create a dashboard with multiple multiselect filters and do the enrichment from our database. But I found that the data in Qualys is different from the Splunk logs, and the input is ingesting only a certain amount of data. My ask is that I want to ingest the complete data every time the input runs, so that I get accurate data to use in the dashboards. Please help me. Regards, Dayal
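While troubleshooting, a sketch for quantifying the gap per day (the index name and the HOST_ID field are assumptions based on typical Qualys TA output; adjust to your environment):

index=qualys sourcetype="qualys:hostDetection"
| timechart span=1d dc(HOST_ID) as unique_hosts count as events

Comparing unique_hosts with the host count shown in the Qualys console for the same day should reveal whether the input is truncating.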