Hi, I am new to the community and I am having a few problems using AppDynamics with Go. I followed the tutorial and successfully connected my Go application with the SDK, but I still have a few questions. First of all, do I need to manually insert a business transaction for every external call I wish to observe? For example:

func (c userRepository) getAll() {
    appd.AddBackend("backTest", "DB", map[string]string{"DATABASE": "mongodb"}, false)
    btHandle := appd.StartBT("database 1", "")
    ecHandle := appd.StartExitcall(btHandle, "backTest")
    // calling database
    if err != nil {
        appd.AddBTError(btHandle, appd.ErrorLevel(2), "error message", true)
    }
    appd.EndExitcall(ecHandle)
    appd.EndBT(btHandle)
    return ...
}

func (c userRepository) getById() {
    appd.AddBackend("backTest", "DB", map[string]string{"DATABASE": "mongodb"}, false)
    btHandle := appd.StartBT("database 1", "")
    ecHandle := appd.StartExitcall(btHandle, "backTest")
    // calling database again
    if err != nil {
        appd.AddBTError(btHandle, appd.ErrorLevel(2), "another error message", true)
    }
    appd.EndExitcall(ecHandle)
    appd.EndBT(btHandle)
    return ...
}

And should I do this manually for every external call that I make (external API, database, ...)? I find it too time-consuming, and it makes the code longer and more difficult to read. Can anyone tell me if I am using the SDK properly, or if there is something I am missing that could make it simpler?
I'm trying to build a search that returns the changes that were made to a GPO. For this, my main search looks for EventCodes 4662, 5137, 5136, and 5141, which are related to changes in the GPO but do not show what the change was specifically. I have another index (AD Audit) that logs all changes. I'm trying to use join, but I can't get the changes to return. Changes have more than one field for each GPO_GUID. My search looks like this:

index=win (EventCode=4662 ObjectType=groupPolicyContainer) OR (EventCode=5137 ObjectClass=groupPolicyContainer) OR (EventCode=5136 ObjectClass=groupPolicyContainer) OR (EventCode=5141 ObjectClass=groupPolicyContainer Tree_Delete=yes)
| rex field=ObjectName "(?i)CN=(?<gpo_guid>{.*?})"
| rex field=ObjectDN "(?i)CN=(?<gpo_guid>{.*?})"
| join type=left gpo_guid
    [ search index=summary objectClass=groupPolicyContainer earliest=-24h@h latest=now()
    | stats count by cn, displayName
    | fields + cn, displayName
    | rename cn as gpo_guid ]
| eval action=case(EventCode=5137, "CREATED", EventCode=5136, "MODIFIED", EventCode=5141, "DELETED")
| table action, src_user, displayName, gpo_guid, ObjectGUID
| rename ObjectGUID as ADDITIONAL_INFO
| join max=0 type=left ADDITIONAL_INFO
    [ search index=audit
    | stats values(ATTRIBUTES_NEW_VALUE) as ATTRIBUTES_NEW_VALUE, values(ATTRIBUTES_OLD_VALUE) as ATTRIBUTES_OLD_VALUE by ADDITIONAL_INFO
    | fields ADDITIONAL_INFO, ATTRIBUTES_NEW_VALUE, ATTRIBUTES_OLD_VALUE ]

Is my search correct? For the join to run successfully, the field from search 1 needs to match the field from search 2, correct? If there are multiple changes per GPO, how can I get all of those results?
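One alternative worth considering for one-to-many enrichment like this is append plus stats instead of join, which avoids join's subsearch result limits and keeps multivalue attributes. A rough sketch, reusing the field names from the search above (the base search is abbreviated as a placeholder):

```
<base GPO event search with gpo_guid extracted>
| append
    [ search index=summary objectClass=groupPolicyContainer
    | rename cn as gpo_guid
    | fields gpo_guid, displayName ]
| stats values(action) as action, values(src_user) as src_user,
        values(displayName) as displayName by gpo_guid
```

With stats values(), all attribute values sharing a gpo_guid are collected as multivalue fields rather than being truncated by a join.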
Hello, do you know if there are any Splunk-recommended TAs for SteelCentral? I was looking in Splunkbase but couldn't find any. Any recommendations would be highly appreciated. Thank you so much.
Just wondering how to get events after a certain time, so I don't get the older results. I tried using the time input filter but can't seem to get it to work for me. Any suggestions?
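If the goal is simply to exclude older events, the time range can also be pinned directly in the search string with the earliest/latest time modifiers, independent of any dashboard input. A sketch (the index name and timestamp are placeholders):

```
index=your_index earliest="08/01/2023:00:00:00" latest=now
```

Relative modifiers work too, e.g. `earliest=-24h` for the last 24 hours.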
While forwarding Linux logs to Splunk, I'm getting the error shown in the picture. Let me know if someone can help me. I've set up Splunk Enterprise on an Ubuntu VM.
Hello community, I want to create a new user silently on an on-premise AppDynamics Controller using the 'Create Central Identity User API', with the command:

curl -H "Content-Type: application/vnd.appd.cntrl+json;v=1" -X POST -d '{"email": "mynewuser@web.de", "security_provider_type": "INTERNAL", "displayName": "MyNewUser"}' -u appd_admin@customer1 https://myappd.controller.net:8181/controller/api/rbac/v1/ci-user

The response is the following:

Central identity user creation flag is not enabled.

The user I specify when issuing the command is the admin user created when the controller was installed. This user has the following roles (requested with api/rbac/v1/users/<id>):

"id": 17, "name": "Dashboard Viewer"
"id": 18, "name": "Workflow Executor"
"id": 20, "name": "DB Monitoring User"
"id": 23, "name": "Server Monitoring User"
"id": 25, "name": "Universal Agent User"
"id": 14, "name": "Account Administrator"
"id": 16, "name": "User"

In the Controller web GUI, this user includes several default roles like 'Account Owner', and creating new users is possible there. With this user I'm able to create roles and groups via the API, but not users. The API docs contain the hint: "You must be the account owner or have the administer user permissions to use the Create Central Identity User API." But I am.

The Controller version is AppDynamics Controller build 21.4.16-1589.
The JDK is openjdk version "1.8.0_322", OpenJDK Runtime Environment (Zulu 8.60.0.22-SA-linux64) (build 1.8.0_322-b06), OpenJDK 64-Bit Server VM (Zulu 8.60.0.22-SA-linux64) (build 25.322-b06, mixed mode).
The OS is Red Hat Enterprise Linux 8.8 (Ootpa).

I'm grateful for any help.
Hello Splunkers!! I have used DB Connect to fetch data from an Oracle database table, and after ingesting the data I see that rows with the same timestamp are split across different events. But I want all the data for one timestamp in a single event. For example, here the timestamp 2023-08-08 14:35:34.849 is split across 8 different lines.

Expected result:

2023-08-08 14:35:34.849, IDPARENT="3433794", NAME="OPERATORID", VALUE_NUMBER="1" IDPARENT="3433794", NAME="INSTANCEID", VALUE_NUMBER="900000000" IDPARENT="3433794", NAME="REASON" IDPARENT="3433794", NAME="PLANNEDQUANTITYEACHES", VALUE_NUMBER="0" IDPARENT="3433794", NAME="PLANNEDQUANTITY", VALUE_NUMBER="0" IDPARENT="3433794", NAME="TASKID", VALUE_NUMBER="10009113755" IDPARENT="3433794", NAME="STOREORDERNR", VALUE_TEXT="1000000432" IDPARENT="3433794", NAME="OPERATOR", VALUE_TEXT="1"

Please help me understand how to achieve this. Is there a predefined sourcetype available for Oracle databases in DB Connect?
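One way this kind of event breaking is commonly handled is a custom sourcetype in props.conf that merges lines until the next timestamp is seen. A sketch, assuming the timestamp format shown above always begins a new event (the sourcetype name is a placeholder you would assign in the DB Connect input):

```
[oracle:mytable]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```

With BREAK_ONLY_BEFORE, lines that do not start with a timestamp are appended to the previous event, so all rows sharing 2023-08-08 14:35:34.849 would land in one event.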
I want to convert decimal durations like (1.78, 1.80, 1.84, 1.95) to hours and minutes (1h:44m, 1h:55m, 1h:44m, 1h:58m). For example, we have 1 hour 95 minutes, but I want 1 hour 58 minutes. This is my query:

| stats count(eval(status="FAIL")) as fail_count, sum(duration) as hours by ww, kit, endtime
| eval hours = round(((hours/60)/60),2)
| eval hours=round(sum(hours),2)

Could you please help out with this?
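For reference, a decimal hours value can be split into whole hours and remaining minutes with floor and a remainder; a sketch that could be appended after the stats (field names taken from the query above):

```
| eval h=floor(hours)
| eval m=round((hours - h) * 60)
| eval duration=h . "h:" . m . "m"
```

For instance, 1.95 hours works out to floor(1.95)=1 hour and 0.95*60=57 minutes, i.e. "1h:57m".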
We have a customer report triggered every month with multiple panels. The requirement is that on every run the panels should have a token for the month, and it should change automatically based on that month's run and be reflected in the panel names. I also want to pass the month name in the Subject or Message. How can I achieve this?

Sample panel name: Top 10 Customer Applications - June 2023
Sample subject: ABC Application Monthly Revision Report - June 2023

As of now I am editing the month manually. Is there a way for it to update on its own? Any inputs?
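One possible ingredient, offered as a sketch rather than a full solution: the month name can be computed at search time with strftime, e.g. for the previous month relative to the run date (the field name report_month is a placeholder):

```
| eval report_month=strftime(relative_time(now(), "-1mon@mon"), "%B %Y")
```

This yields values like "June 2023"; a result field like this can then be referenced in the email subject with a $result.fieldname$ token, assuming the search returns it in the first result row.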
Hello, I have looked over blogs and topics discussing Splunk's data integrity checks and anti-tampering controls, yet most of the resources I found were outdated and/or no longer available. Are there any newer sources or apps that keep track of Splunk's own security against its admins, via the configuration tracker index or other means? Thanks, Best Regards,
Is anyone using Alert Manager 3.x on Splunk ES 8.x, and is it working properly? We tried upgrading Alert Manager from 2.2 to 3.1 on Splunk ES 8.1, but it is not working properly (alerts are not getting converted from "new" to "auto_assigned"). After checking with Splunk, they recommend migrating from Alert Manager to Alert Manager Enterprise. Can someone advise on this, please? Thanks
Hi, I am facing issues finding a delta. I have:

Lookup table: testpolicies.csv
Field name in lookup: policyname

Search: index=test sourcetype=test_sourcetype policy=*
Field name: policy

Now I need to compare the lookup table with the sourcetype using the policy field, and find all the records/rows that exist in the sourcetype but not in the lookup table. Any recommendations will be highly appreciated. Thank you so much.
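A common pattern for this kind of delta is to exclude the lookup's values with a NOT subsearch over inputlookup; a sketch using the index, sourcetype, and field names given above:

```
index=test sourcetype=test_sourcetype policy=*
| stats count by policy
| search NOT
    [| inputlookup testpolicies.csv
     | rename policyname as policy
     | fields policy ]
```

The subsearch returns every policyname from the CSV (renamed to policy so the fields match), and the NOT keeps only policies seen in the events but absent from the lookup.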
Hi Team, I am new to Splunk, so excuse my basic question. We have an application, and multiple alerts/reports are created under it. Now a user from our team has moved to another team, so I have to get the list of alerts/reports that are sent to this user. Can I do it with a search, or is doing it manually the only option? Thanks in advance!!
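It should be possible with a search over the REST endpoint for saved searches, filtering on the email recipient; a sketch (the email address below is a placeholder for the user in question):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.email.to="*jdoe@example.com*"
| table title, eai:acl.app, eai:acl.owner, action.email.to
```

This lists every saved search (alerts and reports) whose email action includes that address, along with the app and owner.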
Hi Team, there are 2 fields added in my search, but it is searching for the same value in both. What I need is a count of results where the search value is present in both field 1 and field 2.
Hi everyone, I have a requirement to implement a search query where I have 3 unique values and one common value:

3 unique values -> A, B, C
1 common value -> D

I am doing something like (A and D) OR (B and D) OR (C and D), but it is not giving any search results, even though it should, since (C and D) is true. @gcusello, if possible, can you help?
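One detail that may be relevant: in SPL the boolean operators must be uppercase, so a lowercase "and" is treated as a literal search term rather than an operator, which could by itself explain the empty result. Also, AND is implicit between adjacent terms, so the expression factors out D; a sketch with the terms above as placeholders:

```
index=main D (A OR B OR C)
```

This is equivalent to (A AND D) OR (B AND D) OR (C AND D) and avoids the operator-case pitfall entirely.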
Thanks for your answer. However, we are facing an issue where there is enough space in our index, but our disk usage has reached around 80%. So I just want to know if volume trimming happens at the disk level as well. Below is our index configuration for the firewall_paloalto index and the disk status.

[firewall_paloalto]
coldPath = volume:cold\firewall_paloalto\colddb
homePath = volume:hotwarm\firewall_paloalto\db
thawedPath = D:\splunk_data\firewall_paloalto\thaweddb
tstatsHomePath = volume:hotwarm\firewall_paloalto\datamodel_summary
frozenTimePeriodInSecs = 47304000
maxTotalDataSizeMB = 4294967295
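On the volume question: disk-level trimming is governed by maxVolumeDataSizeMB on the volume stanzas themselves in indexes.conf; if the hotwarm and cold volumes referenced above carry no such cap, nothing constrains them at the disk level regardless of the per-index settings. A sketch of what capped volume stanzas look like (paths and sizes here are placeholders, not your actual values):

```
[volume:hotwarm]
path = D:\splunk_data\hotwarm
maxVolumeDataSizeMB = 300000

[volume:cold]
path = D:\splunk_data\cold
maxVolumeDataSizeMB = 200000
```

When a volume exceeds its cap, Splunk rolls the oldest warm/cold buckets of the indexes on that volume to frozen until the volume is back under the limit.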
I have deployed Splunk Enterprise, ITSI, and the Add-on for VMware on a single node. I face the problem that neither "Data Collection Status" nor ITSI will run. Checking sourcetype=vmware:perf* OR sourcetype=vmware:inv:hierarchy, no results are returned, but the Data Collection Nodes and Virtual Centers are valid. On the Hydra Framework Status page in the Add-on for VMware there are a lot of Worker Errors and Scheduler Errors, like this:

2023-08-08 15:55:13,521 ERROR [ta_vmware_collection_worker://alpha:182108] Problem with hydra worker ta_vmware_collection_worker://alpha:182108: [HydraGatewayAdapter] could not authenticate with gateway after 3 retries
2023-08-08 15:55:25,725 ERROR [ta_vmware_collection_scheduler://puff] [HydraWorkerNode] [establishGateway] could not authenticate with gateway=https://spl-itsi.infra.local:8008 for node=https://spl-itsi.infra.local:8089 due to error="[HydraGatewayAdapter] could not authenticate with gateway after 3 retries", marking node as dead
2023-08-08 15:55:02,591 ERROR [ta_vmware_collection_scheduler://puff] [HydraCollectionManifest] Attempted to assign jobs but we have no active workers to assign to. Restarting Scheduler...

My environment is:
Splunk Enterprise: 9.0.0, ITSI: 4.17.0
Add-on for VMware: 4.0.5, VMware Indexes: 4.0.3, vCenter Logs / VMware ESXi Logs: 4.2.1, VMware Extractions: 4.0.3
※ I don't use the Add-on for VMware Metrics.
VM information: OS: RHEL 9 (64-bit), 40 vCPU, 40 GB RAM, 600 GB disk

What should I do? Do I need the Splunk OVA for VMware Metrics? I would prefer not to use it.
Hello, last month I started my Splunk training on the official Splunk website, working toward Splunk certification. As I've been progressing through the course, I find myself challenged by some specific topics that aren't entirely clear to me. For instance, I'm having trouble understanding the detailed functionality of Splunk's Search Processing Language (SPL) and how to properly configure dashboards and alerts. Could anyone here provide some insights or resources that might help me grasp these concepts better? Any advice from those who have already taken the certification would be greatly appreciated! I have already read this: https://www.splunk.com/en_us/blog/platform/flatten-the-spl-learning-curve-introducing-splunk-ai-assistant-for-spl.html and many blogs, but my question is still not answered. Can anyone please suggest some resources? Thank you in advance.
Hello Splunkers!

Context: I want to deploy Splunk configuration to monitor Unix system logs. Suppose I have two groups of servers (group A and group B) and I want to monitor different folders/files depending on the group of servers. For that use case I would be tempted to use the official Splunk Nix TA and a Deployment Server to distribute the app. The thing is, I cannot deploy the same TA to both groups, since I want a different local/inputs.conf depending on the server's group. How would you do that?

My idea was to deploy the Splunk TA Nix without modification (no edit of local/* files) and create two other apps containing only the input configuration for each group. In the end I would have:

- Servers of group A: default Splunk TA Nix + custom app for inputs A
- Servers of group B: default Splunk TA Nix + custom app for inputs B

What do you think of this approach?

Thanks,
GaetanVP
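For reference, the split described above maps naturally onto two server classes in serverclass.conf on the deployment server, each shipping the unmodified TA plus the group's inputs app. A sketch, where the hostname patterns and custom app names are placeholders:

```
[serverClass:groupA]
whitelist.0 = groupA-host*

[serverClass:groupA:app:Splunk_TA_nix]
restartSplunkd = true

[serverClass:groupA:app:inputs_group_a]
restartSplunkd = true

[serverClass:groupB]
whitelist.0 = groupB-host*

[serverClass:groupB:app:Splunk_TA_nix]
restartSplunkd = true

[serverClass:groupB:app:inputs_group_b]
restartSplunkd = true
```

Keeping the TA pristine and isolating inputs in thin per-group apps also makes future TA upgrades a simple drop-in replacement.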
Hi, I just noticed some strange behavior with Splunk identity management and the data models. If I run a search based on "index + sourcetype", my results include all the identity information when the user is known. But when I execute the same search based on the data model (Web in my example), I only get the fields that are specifically defined in the data model. I don't understand that behavior; the goal of Splunk ES is to use the data models as much as we can, but we lose information. What am I missing? How can I retrieve all the identity (and asset) information with a data model search?

Thanks in advance,
Xavier