All Topics


Hello, I'd like to transpose a table of results by grouping columns. Here is my table:

time1       event1   time2       event2   time3       event3
01/01/2022  titi     02/01/2022  toto     04/01/2022  tata

I'd like to transpose this structure into:

time        content
01/01/2022  titi
02/01/2022  toto
04/01/2022  tata

I didn't find a way to solve this. Thanks in advance.
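A sketch of one possible approach for the table above, assuming exactly the three time/event column pairs shown (the field names would need to match the actual results):

```
... base search producing time1 event1 time2 event2 time3 event3 ...
| eval pair=mvappend(time1." ".event1, time2." ".event2, time3." ".event3)
| mvexpand pair
| rex field=pair "^(?<time>\S+)\s+(?<content>.+)$"
| table time content
```

Each row is folded into a multivalue field of "time event" pairs, expanded to one row per pair, and split back into two columns.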
Hello, is it possible to send metrics to an event index? For instance, indexing df_metric from Splunk_TA_nix. Thanks.
Hello everyone, I have the following type of data to analyze:

timestamp  endpoint   executionTime
08:12      /products  0.3
08:20      /products  0.8
08:25      /users     0.5
08:41      /users     1.0
08:50      /products  0.7

I would like to display information about the slowest endpoint in each 30-minute window. In this example it would look like:

timeWindow  timestamp  endpoint   maxExecutionTime
08:00       08:20      /products  0.8
08:30       08:41      /users     1

It's fairly easy to gather data on the maximum execution time only, so I created this query:

index=myindex | timechart span=30m max(executionTime) as maxExecutionTime

but now I have no idea how to attach the endpoint called and the actual timestamp. How should I do it?
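A sketch of one way to keep the endpoint and timestamp alongside the per-window maximum (field names taken from the post; how ties within a window are broken by the final dedup is left as an assumption to refine):

```
index=myindex
| bin _time as timeWindow span=30m
| eventstats max(executionTime) as maxExecutionTime by timeWindow
| where executionTime=maxExecutionTime
| dedup timeWindow
| table timeWindow _time endpoint maxExecutionTime
```

eventstats computes the per-window maximum without collapsing rows, so the matching event's own _time and endpoint survive the filter.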
Hi guys,

I need some help with setting up multisite indexer clustering. We have two data centers, A and B. Here is the server architecture for these data centers:

DATACENTER A: 3 search heads, SH-A, SH-B, SH-C (in a search head cluster), and 2 indexers, IDX-1 and IDX-2.

DATACENTER B: 3 disaster recovery search heads, SH-A-DR, SH-B-DR, SH-C-DR (in a search head cluster), and 2 indexers, IDX-3 and IDX-4.

We want to set up indexer clustering in such a way that IDX-1 and IDX-3 are clustered, and IDX-2 and IDX-4 are clustered, so that SH-A, SH-B, and SH-C (in DC A) can search IDX-1 and IDX-2, while during DR SH-A-DR, SH-B-DR, and SH-C-DR (in DC B) can search IDX-3 and IDX-4.

What would be the best way to get this set up? Do we need to set up 2 cluster masters? If yes, how do we set up a search head cluster with 2 cluster masters? Please suggest.

Thanks, Neerav
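For reference, a multisite indexer cluster is normally coordinated by a single cluster manager that knows about both sites, rather than by two independent cluster masters. A minimal server.conf sketch of that pattern (all stanza values are illustrative assumptions, not a validated design for this exact topology; older Splunk versions use mode = master and master_uri instead of manager):

```
# server.conf on the single cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

# server.conf on each indexer peer (site2 peers would set site = site2)
[general]
site = site1

[clustering]
mode = peer
manager_uri = https://<cluster-manager-host>:8089
pass4SymmKey = <shared-secret>
```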
Hello everyone! I'm trying to split a single multivalue event into multiple multivalue events. Here is my base search:

sourcetype="xxxx"
| transaction clientip source id maxspan=5m startswith="yesorno=yes" endswith="event=connected" keepevicted=true mvlist=true,responsetime,status,yesorno,clientip,event,_time
| sort _time
| eval MergedColumns=responsetime . " " . yesorno
| stats list(event) as event, list(MergedColumns) as MergedColumns, list(responsetime) as responsetime, by yesorno, clientip, id
| where !(event=="connected")
| table MergedColumns source clientip

Unfortunately, I am obliged to use a transaction here and not the stats command. Here is my data:

MergedColumns: 10 yes / 510 no / 348 no / 50886 no -- source: username1 -- clientip: xxx.xxx.xxx.xxx
MergedColumns: 10 yes / 513 no / 1239 no / 9 yes / 160 no / 340 no / 21421 no / 509 no / 685 no / 13799 no / 149 no -- source: username2 -- clientip: xxx.xxx.xxx.xxx

I would like to split each event on the "xxx yes" values, like so:

MergedColumns: 10 yes / 510 no / 348 no / 50886 no -- source: username1 -- clientip: xxx.xxx.xxx.xxx
MergedColumns: 10 yes / 513 no / 1239 no -- source: username2 -- clientip: xxx.xxx.xxx.xxx
MergedColumns: 9 yes / 160 no / 340 no / 21421 no / 509 no / 685 no / 13799 no / 149 no -- source: username2 -- clientip: xxx.xxx.xxx.xxx

Moreover, here I have only two "xxx yes" values in the same multivalue event, but I can possibly have more than that (3 or 4). I tried lots of things but none seem to work. (Here is the regex to extract "xxx yes": "^\S{1,} yes$".) In fact, adding this:

| mvexpand MergedColumns
| regex MergedColumns="^\S{1,} success"
| table MergedColumns source clientip

seems to split my values correctly; however, it removes all the remaining "xxx no" values. Does anyone have a solution?

Kind regards,
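A sketch of one common pattern for this kind of grouping: expand the multivalue field, mark the rows that start a new group, turn the markers into a running group id, and re-aggregate (the regex is the one given in the post; groupStart and groupId are illustrative field names):

```
| mvexpand MergedColumns
| eval groupStart=if(match(MergedColumns, "^\S+ yes$"), 1, 0)
| accum groupStart as groupId
| stats list(MergedColumns) as MergedColumns by groupId, source, clientip
| fields - groupId
```

Unlike filtering with | regex, this keeps the "xxx no" rows, because every row is assigned to the group opened by the most recent "xxx yes" row rather than being discarded.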
Intro
The client upgraded their Oracle DB from v12.1.0.2 to v19.15.0.0.0. The client DBAs were experiencing load and caching issues on their DB and found that queries run by the service account used by the AppD DB Agent were a culprit, so they asked for the AppD DB Agent to be upgraded.

Initial action
Upgraded the prod, on-prem Windows DB Agent from v21.2.0.2285 to v22.6.0.2803 (latest available at the time).

Issue
2 of the Oracle DBs stopped reporting the DB Load metrics consistently. The DB Agent logs have no error or warning entries related to these 2 DB collectors that could be troubleshot. Several other Oracle DBs on the same version, the same controller, and the same agents are not experiencing the issue. All other DB metrics are reporting in.

Further testing
- Tested in pre-prod by having the pre-prod DB Agent (same new version as prod) and pre-prod controller (SaaS, v22.5.0-662) monitor the 2 problematic prod Oracle DBs. The same issue is present: no Load metrics. (Points to an agent version issue.)
- Monitored the pre-prod equivalent Oracle DB (same Oracle version, different DB), but it does not experience the issue. (Issue isolated to specific DBs.)
- Tested monitoring the problematic DBs with a completely different DB Agent matching the older version that was upgraded away from, and the issue is not present. This again shows it is related to the new agent version.

Conclusion
The client could not carry on with missing Load metrics for the DBs in question, so the agent was rolled back to the older version that did not have this issue. AppD support has 2 theories and is still asking for more queries to be run against the prod DBs while they experience the issue, but this only happens on the prod DBs and we cannot recreate it in pre-prod, so it is not a quick thing to do (it would mean breaking the prod DB monitoring just to wait for the issue and then run queries).

Theory 1: the DB Agent is not able to query the DB.
Theory 2: the DB Agent is not able to send all metrics to the SaaS controller.

[Screenshot: Load metrics not reported consistently for a busy prod Oracle DB.]

I am hoping someone else has come across this issue as well and has a possible solution, or more evidence pointing to a possible DB Agent version bug.

*This is the second time I have created this post, because my first one went missing after I submitted it.
Hi,

I have a curious problem (btw, not my first PowerShell input). I am trying to get some Active Directory data into Splunk. Below is slightly altered output of my script:

[ { "SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount01", "Enabled": true, "EmployeeNumber": "11112", "SN": "Surname01", "Description": "0200000000", "Department": "Department01", "Company": "The Firm", "emailaddress": "Email01@domain.com", "DistinguishedName": "The Distinguished Name 01", "hkDS-EntryDate": "09.09.1991 02:00:00", "LastLogonDate": "18.07.2022 07:22:38", "PasswordLastSet": "02.06.2022 09:22:36" }, { "SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount02", "Enabled": true, "EmployeeNumber": "11113", "SN": "Surname02", "Description": "000000000", "Department": "Department02", "Company": "The Firm", "emailaddress": "email02@Domain.com", "DistinguishedName": "The Distinguished Name 01", "hkDS-EntryDate": "10.10.2002 02:00:00", "LastLogonDate": "18.07.2022 08:07:31", "PasswordLastSet": "26.05.2022 17:27:42" } ]

Exported to a file and tested with validators, everything is fine. But what I see in Splunk is:

"SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount01", "Enabled": true, "EmployeeNumber": "null", "SN": "", "Description": "null", "Department": "null", "Company": "", "emailaddress": null, "DistinguishedName": "The Distinguished Name", "hkDS-EntryDate": "null", "LastLogonDate": "null", "PasswordLastSet": "null" }

As you can see, I am missing a lot of information, and I can't figure out why. Some fields like SamAccountName and DistinguishedName are working, but other variables like Company, Department, or Description are missing. The script is rather long, but if needed I can post the parts showing how I do things.

The inputs.conf for this is:

[powershell://Get_AD_Report]
script = . "$SplunkHome\etc\system\bin\Powershell\GetADReport.ps1"
schedule = 15 * * * *
sourcetype = _json
index = hk_office365

Maybe someone has some kind of clue what is happening here? It would really help; I have been on this for much too long already and have tried so many different approaches.
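One area worth ruling out (a sketch of an assumption, not a confirmed diagnosis): if the events are line-broken or truncated before JSON parsing, keys later in each object would be lost while earlier keys survive, which matches the symptom described. A props.conf sketch for the parsing tier, with "hk:ad:report" as an illustrative custom sourcetype name replacing the generic _json:

```
# props.conf sketch (assumption: events should be kept whole and parsed as JSON)
[hk:ad:report]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
TRUNCATE = 0
```

If a custom sourcetype like this were used, the [powershell://Get_AD_Report] stanza would need sourcetype = hk:ad:report instead of sourcetype = _json.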
We are working on a table where we just pass an SPL query to Splunk JS, which populates the table in the UI.

- My problem is that I am not able to keep the table headers fixed; when the table is scrolled, they do not stay in place.
- Also, to scroll horizontally I have to go all the way to the end of the table layout.

Any help or suggestion would be greatly appreciated.

Thanks, Jabez.
Hello, in my classic dashboards I have created input fields within a panel so that I can use them only for that specific panel. With Dashboard Studio, however, I am only able to create inputs at the global level, and I am unable to place them next to my panels as in the classic version. Can anyone help, if you have already tried achieving this? Thank you.
How do I include JavaScript files from another JavaScript file in the local appserver/static folder? The code below somehow does not work:

require([
    'jquery',
    './myCustomUtils',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'
], function($, myCustomUtils, mvc, _) {
    // ...
});
Dear experts,

I've created an alert based on a message string to identify closed connections. However, the alert gets triggered only once, although the problem doesn't get fixed until we bounce. I am looking for a query that produces a recurring alert until the success message string "*reconfigured with 'RabbitMQ' bean*" appears as the latest message, compared against the failure strings, across all events.

Failure messages: *com.rabbitmq.client.ShutdownSignalException* OR "*channel shutdown*"
Success message: "*reconfigured with 'RabbitMQ' bean*"

Current alert query, which fires only once:

index IN ("devcf","devsc") cf_org_name IN (xxxx,yyyy) cf_app_name=* "rabbit*" AND ("channel shutdown*" OR "*com.rabbitmq.client.ShutdownSignalException*" OR "*rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error*")
| stats count by cf_app_name, cf_foundation

Thank you for the help
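A sketch of one way to keep an alert firing until a success message is the most recent event (search strings copied from the post; the grouping fields and the scheduled time window are assumptions to adjust):

```
index IN ("devcf","devsc") cf_org_name IN (xxxx,yyyy) cf_app_name=*
    ("*channel shutdown*" OR "*com.rabbitmq.client.ShutdownSignalException*" OR "*reconfigured with 'RabbitMQ' bean*")
| eval type=if(match(_raw, "reconfigured with 'RabbitMQ' bean"), "success", "failure")
| stats latest(eval(if(type="failure", _time, null()))) as lastFailure,
        latest(eval(if(type="success", _time, null()))) as lastSuccess
        by cf_app_name, cf_foundation
| where isnull(lastSuccess) OR lastFailure > lastSuccess
```

Scheduled on a recurring cron, this would keep producing results (and therefore keep alerting) for every app whose most recent matching event is a failure rather than a success.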
Hi, we are running searches via the API with the query:

index=_internal sourcetype=splunk_python subject | fields subject, recipients | where subject like "%ORION%"

But when we try to collect the results, we get an error from a lookup that is not part of the original search. Via the web UI the search works fine; the problem only occurs via the API. Have you experienced something like this? Thanks!
How do I integrate AppDynamics with ServiceNow without using any plugin? Can I integrate AppDynamics with ServiceNow using a custom REST API, and if so, how?
Hi all, I am searching Splunk role data using the REST API:

| rest /services/authentication/users splunk_server=local

Is there any way to create a dashboard to check which users are currently logged in to Splunk? Thanks
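A sketch of one approach sometimes used for this, assuming the search head exposes the httpauth-tokens REST endpoint for active UI sessions (field names can vary between Splunk versions, so they are assumptions to verify):

```
| rest /services/authentication/httpauth-tokens splunk_server=local
| dedup userName
| table userName timeAccessed
```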
Hi all, we have a case where we want to restrict a user from editing or updating an input parameter. We have created an add-on using Splunk's Add-on Builder (v4.x). The add-on takes a couple of data input parameters, including one parameter that has to be disabled and must not be allowed to be updated. Is there a way we can update the code, or does Add-on Builder provide an option for such functionality? Any help or suggestions will be highly appreciated.

Thanks, Jabez.
Hello Splunk Community, I have the following search command:

index="myIndex" host="myHost" myScript Running OR Stopped
| eval running= if(like(_raw, "%Running%"), 1, 0)
| eval stopped= if(like(_raw, "%Stopped%"), 0, 1)
| table _time running stopped
| rename running AS "UP"
| rename stopped AS "DOWN"

It looks strange: there are four events with "Stopped" in them; the rest are all "Running". The script logs either Running or Stopped every 5 minutes. When I hover over the line, it reports DOWN as 1 the entire time, even though it should be 0 and only be 1 four times. How do I adjust this so that it looks like this:

-------------_----------------
________--_________

where the upper line = Running and the bottom line = Stopped.
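For what it's worth, the second eval in the quoted search returns 1 for every event that does NOT contain "Stopped", which would explain DOWN being 1 the entire time. A sketch of a corrected version (the timechart span and aggregation are assumptions based on the 5-minute logging interval described above):

```
index="myIndex" host="myHost" myScript Running OR Stopped
| eval UP=if(like(_raw, "%Running%"), 1, 0)
| eval DOWN=if(like(_raw, "%Stopped%"), 1, 0)
| timechart span=5m max(UP) as UP, max(DOWN) as DOWN
```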
Good morning all. I'm in a big mess that I can't solve: I'm a student preparing my graduation project, and it's my first time with Splunk. I want to know whether my steps are correct.

I want to analyze the user accounts of my Active Directory. I want to work only on information concerning account connections (logon, logoff, ...) and account lifecycle (creation, modification, deletion, ...). For that I installed 3 apps on my Splunk server:

- Splunk_TA_windows
- Splunk_TA_microsoft_ad
- SA-ldapsearch (I don't know why I can't save the domain password in this add-on, despite the connection test being successful)

After that I copied the 2 folders Splunk_TA_windows and Splunk_TA_microsoft_ad to my AD server, into the Splunk forwarder folder path. Then I configured this inputs file and copied it into a new "local" folder on the 2 servers:

###### Monitor Inputs for Active Directory ######
[monitor://C:\debug\netlogon.log]
sourcetype = MSAD:NT6:Netlogon
disabled = 0
renderXml = false
index = main

[WinEventLog://Security]
disabled = 0
index = main
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4724,4725,4726,4624,4625,4720,4732,4722,4738,4742,4729,4715,4719,4768,4769
blacklist1 = EventCode="4662" Message="Object Type: (?!\s*group Policy Container)"
blacklist2 = EventCode="566" Message="Object Type: (?!\s*group PolicyContainer)"
renderXml = false

[WinEventLog://Microsoft-windows-Terminalservices-LocalSessionManager/operational]
disabled = 0
index = main
renderXml = false

Am I missing another step? Is the inputs file configuration correct? Can I meet my needs with this configuration? Thank you for answering me, because I cannot find the right answer on the net and I have a big problem: I find incomplete information on some users when I run searches concerning their session logons and logoffs.

I apologize for this long message, but I had to explain all the details to get the best advice.
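Once the forwarder picks up the new inputs, a quick sanity-check search along these lines can confirm the data is arriving (a sketch; the source and sourcetype values are assumptions based on typical Splunk_TA_windows defaults and the index named above):

```
index=main (source="WinEventLog:Security" OR sourcetype="MSAD:NT6:Netlogon")
| stats count by host, source, sourcetype
```

If the Security events show up here but specific users still look incomplete, the whitelist of event codes would be the next thing to compare against the logon/logoff events actually generated for those users.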
Hi, I noticed this failure in the AppInspect report (version 2.22.0). Is there a way we can fix this on Splunk Cloud? Below are the failure details from the report:

Please check for inbound or outbound UDP network communications. Any programmatic UDP network communication is prohibited due to security risks in Splunk Cloud and App Certification. The use or instruction to configure an app using Settings -> Data Inputs -> UDP within Splunk is permitted. (Note: UDP configuration options are not available in Splunk Cloud and as such do not impose a security risk.)
File: bin/botocore/session.py
Line Number: 204

Thanks, Jabez.
Hello, sendemail does not work with variable fields. Example:

index=mail | table id domain | eval email=id."@abc.com" | sendemail to="$email$" subject="test" sendresult=true inline=true message="test"

>> command="sendemail", {} while sending mail to:

index=_internal email

>> ERROR sending email. subject="test", results_line="None", recipients="[]", server="localhost"

Why can't it identify my email address? It works normally when I enter the email address directly.
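As far as I know, field tokens like $email$ are not substituted per result when sendemail runs inline in an ad-hoc search; that substitution happens in alert actions. A sketch of one workaround sometimes used, running sendemail once per row via map, which does substitute $field$ tokens from each input row (the maxsearches limit is an illustrative assumption):

```
index=mail
| table id
| eval email=id."@abc.com"
| dedup email
| map maxsearches=100 search="| makeresults | sendemail to=\"$email$\" subject=\"test\" sendresult=true inline=true message=\"test\""
```

Note each input row triggers a separate subsearch and a separate email, so this can be slow and noisy for large result sets.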
How do I change the x-axis label rotation in Dashboard Studio? I added the following line to the visualization's options, but nothing changes:

"xAxisLabelRotation": 90