All Posts


Before this stats command, also add | filldown Appname so that empty Appname rows adopt the name from the row above.
Are you clear on the difference between NULL and EMPTY? Your mvmap is checking for non-EMPTY values of the MV field; it is not checking for NULL. This pair of lines loops over each of the MV values of impConReqID and removes only EMPTY values:

| eval ImpConReqID= coalesce(ImpConReqId,impConReqId1) ...
| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))

So if you have real null values in your MV, then you need to check for null, not empty, i.e.

| eval ImpCon=mvmap(ImpConReqID,if(isnotnull(ImpConReqID),"ImpConReqID: ".ImpConReqID,null()))
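A minimal sandbox check of the EMPTY case, using makeresults and made-up values (split() on a doubled comma produces an EMPTY middle value):

```
| makeresults
| eval ImpConReqID=split("REQ-1,,REQ-2", ",")
| eval nonEmptyOnly=mvmap(ImpConReqID, if(match(ImpConReqID,".+"), "ImpConReqID: ".ImpConReqID, null()))
```

nonEmptyOnly ends up with only two values, because the EMPTY middle value fails the match(".+") test and is dropped.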
OK, so try the search; I expect it will give you what you want.
Hello. This is my first time posting, so please excuse any mistakes. I believe $SPLUNK_HOME/etc/passwd contains the following fields, and I would like to know what goes into <?1> and <?2>: :<login username>:<password>:<?1>:<name shown in Splunk Web>:<role>:<email address>:<?2>:<if "Require password change on first login" is set, this holds the [force_change_pass] value>:<a number (not sure what this is; uid?)> Also, please let me know if my understanding is wrong anywhere.
Hey @meshorer, have you tried to SSH to your Phantom/SOAR server and run this? phenv set_preference --indicators yes That should be enough to re-enable the indicators for you. Once executed, please allow up to 5 minutes for the system to start the indicators again.
Hey Alex, you mean you are trying to use your tar.gz from GitLab? It can be the case that git itself is adding other folders to the root directory structure in that tar.gz. Check the contents to see if that is the case (including hidden ones). If so, it may simply be a matter of removing them and trying again.
Hi @SureshkumarD, I tried out your code - the rows aren't showing up as links because of the table formatting / row color setting. Remove this line from your code:

"rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",

That is causing the "link" effect on the clickable rows to disappear. Here's the full viz code:

"visualizations": {
  "viz_qFxEKJ3l": {
    "type": "splunk.table",
    "options": {
      "count": 5000,
      "dataOverlayMode": "none",
      "drilldown": "none",
      "backgroundColor": "#FAF9F6",
      "tableFormat": {
        "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
        "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
        "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
      },
      "eventHandlers": [
        {
          "type": "drilldown.customUrl",
          "options": {
            "url": "$row.URL.value|n$",
            "newTab": true
          }
        }
      ],

Give that a go on your dashboard.
Splunk is not a spreadsheet. Having said that, you can use the stats command to "merge cells": | stats min('S No') as "S No" list(*) as * by Appname
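A self-contained sketch of that pattern with made-up sample rows (filldown first, as suggested earlier in the thread, so empty Appname rows inherit the value from above, then stats to "merge" them):

```
| makeresults count=4
| streamstats count as "S No"
| eval Appname=case('S No'=1, "AppA", 'S No'=3, "AppB", true(), null())
| filldown Appname
| stats min('S No') as "S No" list(*) as * by Appname
```

This collapses the four rows into two, one per Appname, keeping the lowest "S No" for each.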
Hello! I was able to connect fine to the controller using the provided `agent-info.xml` (after downloading the custom-config AppServerAgent-1.8-24.4.1.35880). Now, what I'm trying to do is reproduce the same configuration, but solely based on env. variables, because I'll be deploying my app with a "blank" AppServerAgent-1.8-24.4.1.35880 with no custom config. Unfortunately, I can't make it work.

When it works fine using the provided `agent-info.xml`, I can see in the logs:

[AD Agent init] 30 May 2024 17:32:07,965 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration attempted: Application Name [anthony-app] Component Name [anthony-tier] Node Name [anthony-node]
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration SUCCEEDED!

When it fails, using env. variables, I'll get:

[AD Agent init] 30 May 2024 17:29:22,398 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:29:22,600 ERROR ConfigurationChannel - HTTP Request failed: HTTP/1.1 401 Unauthorized
[AD Agent init] 30 May 2024 17:29:22,600 INFO ConfigurationChannel - Resetting AuthState for Proxy: [state:UNCHALLENGED;] and Target [state:FAILURE;]
[AD Agent init] 30 May 2024 17:29:22,601 WARN ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [xxx.saas.appdynamics.com], port[443], exception [null]
[AD Agent init] 30 May 2024 17:29:22,601 ERROR ConfigurationChannel - Exception: NULL

Here are all the env. variables I've set:

APPDYNAMICS_CONTROLLER_HOST_NAME=xxx.saas.appdynamics.com
APPDYNAMICS_AGENT_TIER_NAME=anthony-tier
APPDYNAMICS_AGENT_APPLICATION_NAME=anthony-app
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=password-copy-pasted-from-config
APPDYNAMICS_AGENT_NODE_NAME=anthony-node
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true

One thing that is troubling me in the logs, though: using env. variables, I always get messages like these:

[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - XML Agent Account Info Resolver did not find account name. Using default account name [customer1]
[AD Agent init] 30 May 2024 17:29:18,339 WARN XMLConfigManager - XML Agent Account Info Resolver did not find account access key.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - Configuration Channel is using ControllerInfo:: host:[xxx.saas.appdynamics.com] port:[443] sslEnabled:[true] keystoreFile:[DEFAULT:cacerts.jks] use-encrypted-credentials:[false] secureCredentialStoreFileName:[] secureCredentialStorePassword:[] secureCredentialStoreFormat:[] use-ssl-client-auth:[false] asymmetricKeysStoreFilename:[] asymmetricKeysStorePassword:[] asymmetricKeyPassword:[] asymmetricKeyAlias:[] validation:[UNSPECIFIED]

Although all env. variables were properly loaded, I believe:

[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_HOST_NAME] for controller host name [xxx.saas.appdynamics.com]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_PORT] for controller port [443]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_SSL_ENABLED] for controller ssl enabled [true]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Default Agent Account Info Resolver found env variable [APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY] for account access key [****]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [anthony-app]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [anthony-tier]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [anthony-node]

Anyways... Is configuration solely provided by env. variables supposed to work? Thank you!
Awesome stuff! I reproduced your instructions in a step by step guide.
I would like to visualize using the Single Value visualization with a Trellis layout and sort the panels by the value of the latest field in the BY clause. I can follow the timechart with a table and order the rows manually, but I would like something more automatic. Is there a way of specifying a field projection order via some kind of sort that can be used with timechart? I can't seem to find anything and may need to rely on something outside the box. Please advise, Tim. Here is my SPL, with the resulting visualization below:

| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
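Edit: one workaround idea I'm experimenting with (untested). Since Trellis panels appear to be ordered alphabetically by the split value, prefixing each group with an inverted, zero-padded copy of its latest value should make alphabetical order match descending value. This assumes non-negative integer metric values:

```
| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| eventstats latest(value) as lastv by group
| eval group=printf("%010d", 9999999999 - lastv)."_".group
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
```

The downside is the ugly numeric prefix in the panel titles, so I'd still prefer a built-in option if one exists.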
I work in Vulnerability Management and am a novice Splunk user. I want to create a query to quickly determine whether we possess any assets that could be affected when a critical CVE is released. For example, if Cisco releases a CVE that affects Cisco Adaptive Security Appliance (ASA), I want to be able to run a query and quickly determine whether we possess any of the affected assets in our environment.
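Edit: to make the question concrete, here's the shape of query I'm imagining; the index, sourcetype, and field names are placeholders, not our real ones:

```
index=vulnerability_scans sourcetype=scanner:results cve="CVE-XXXX-YYYY"
| stats values(signature) as findings latest(_time) as last_seen by dest
| convert ctime(last_seen)
```

I don't know what fields our scanner data actually exposes yet, so any pointers on the standard way to model this would help.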
I'm sure this has been asked to death, but can I do this as an inline process during a table transforming command?
Good afternoon, I'm running a super basic test to validate that I can send a POST to our Ansible Tower listener. Search:

| makeresults
| eval header="{\"content-type\":\"application/json\"}"
| eval data="{\"Message\": \"This is a test\"}"
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true

The same payload in Postman works 100%. What I have noticed is that it's converting the double quotes to single: curl_data_payload: {'monitor_state': 'unmonitored'} When I test that converted payload in Postman, it also fails with the same "invalid JSON payload" message. Has anyone had this issue or know how to address it? I have no hair left to rip out.
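Edit: one thing I'm going to try next, in case the hand-escaped quotes are what the curl command mangles: building the payload with json_object() instead (just a guess on my part):

```
| makeresults
| eval header=json_object("content-type", "application/json")
| eval data=json_object("Message", "This is a test")
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true
```

I'll report back whether the payload survives as valid JSON this way.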
I'm using Splunk Enterprise 9.2.0. We are able to update our app, which runs on a VM in vSphere, using a .tar.gz file from GitLab: [our_app_name].tar.gz So, it works just fine on that VM, but when I try to update the one we have running in a cloud environment in AWS, I get the following error after I upload a .tar.gz file: There was an error processing the upload.Invalid app contents: archive contains more than one immediate subdirectory: and [our_app_name] I would appreciate any advice on what might fix that, or on how I should start troubleshooting. Thank you.
Hi guys, I have several topics on the table. 1) I would like to know if you have any advice, process, or document defining the principles of creating a naming convention for sourcetypes. We have faced the classic issue where some of our sourcetypes had the same name, and we would like to find ways to avoid that from now on. 2) Is it pertinent to add predefined/learned sourcetypes with a naming convention based on the format? (Then we could solve point 1 with a naming convention like app_format, for example.) How do you technically add new predefined sourcetypes, and how do you handle both the management of sourcetypes (point 1) and the management of predefined sourcetypes? 3) How do you share Knowledge Objects, props, and transforms between 2 Search Head clusters, and how do you implement a continuously synced mechanism that keeps these objects in sync between both clusters? Do we have to use Ansible deployments to perform the changes on both clusters, or is there a Splunk way to achieve this synchronization more easily inside Splunk (via a script using the REST API, the command line, configuration, etc.)?
Did you end up solving the issue with the customer having failed searches (more often than not) while the system resources are not even being used? I am running into the same issue on 9.2.1 running on RHEL 8.
Hi @tscroggins, thanks for all your comments. I'm running 8.2, and the first suggestion you made worked, but we didn't see the changes until the search heads were restarted. Now the CSV files are coming in the right format. One thing I noticed: if I clone an existing report with a CSV format configuration, the new one adopts that configuration too. Thanks
We need some information to work with. What is the full text of the error message? What is the setting/query that generated the error?
What do you mean by "migration"? As with any other similar solution, you typically don't migrate data already ingested/indexed/saved/whatever in the original solution. You just spin up a new Splunk environment and onboard the sources you had sending to the old SIEM (if possible, you try to have a transition period during which sources send to both the old solution and Splunk). Any "logic", like asset databases, alert rules, reports and so on, has to be manually recreated to give you the desired business outcome. It's best to engage your local friendly Splunk Partner for such a project.