All Topics



Hi all, we have an index, indexA, which receives data from multiple agencies (agentA, agentB, agentC), and another index, indexB, which contains only agentB data. Our requirement is to correlate indexA and indexB so that users have access only to agentB data from both indexes. I tried the options below:

a. A search filter using a lookup:

index=indexA
| search [ | inputlookup indexB_lookup.csv | fields indexB_agentB | rename indexB_agentB as indexA_agentB ]

b. A correlation search:

(index=indexA sourcetype=indexA_sourcetype) OR (index=indexB sourcetype=indexB_sourcetype)
| fields indexA_agentB indexB_agentB sourcetype
| eval agentB = coalesce(indexA_agentB, indexB_agentB)
| stats dc(sourcetype) as dc_sourcetype values(sourcetype) as sourcetype values(indexA_agentB_raw) values(indexB_agentB_raw) by agentB
| where dc_sourcetype=2

Neither method worked, because a role's search filter only allows the host, source, sourcetype, eventtype, and search fields. Please let me know if there is a better option to restrict user access to only agentB data across both indexes.
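For role-based restriction, the filter usually has to live in the role definition rather than in an ad-hoc search. A minimal sketch in authorize.conf, assuming the agency can be identified by an eventtype or indexed field (the role name role_agentb and the eventtype agentB_data are hypothetical):

```
# authorize.conf (sketch -- role and eventtype names are assumptions)
[role_agentb]
# Search filters may only reference indexed fields, the default fields
# (host, source, sourcetype), eventtypes, and literal search terms
srchFilter = eventtype=agentB_data
srchIndexesAllowed = indexA;indexB
```

An eventtype such as agentB_data could then wrap whatever search terms identify agentB events in each of the two indexes.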
Hello everyone! When I searched from the search head, I used the earliest and latest time modifiers, but they didn't work. Usually the earliest and latest modifiers take priority over the time range picker on the right side of the GUI. I ran the same search on the indexer just in case, and there the earliest and latest modifiers do work. I really want to find a solution to this problem. Thanks.
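For reference, a minimal sketch of the inline time modifiers in question (the index name is a placeholder); when the search is parsed, these should take precedence over the time range picker:

```
index=main earliest=-24h@h latest=@h
| stats count
```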
Hi all, I've noticed that the last Universal Forwarder available for FreeBSD is for 11.3, but FreeBSD is now up to 13.0. I know that FreeBSD is a bit of a dying space except in appliances where tools exist to manage log output as syslog, e.g. pfSense, OPNsense, TrueNAS, etc. Can I reasonably assume, then, that there are not likely to be any new versions of the Universal Forwarder supporting later versions of FreeBSD for this reason, i.e. that FreeBSD now only shows up in appliances that can do syslog forwarding?
Hi all, I'm new to Splunk, so I hope I'm just missing a step or something. I've searched for a while and still am not sure what I'm doing wrong. I have Splunk Enterprise running on one server. I have configured it to receive data via port 9997 through the "Forwarding and Receiving" settings page. I have installed a Universal Forwarder on another server. I added a forward-server (side note: can you pass in the group name via the CLI, or is it only editable in the outputs.conf file? I can't find the full options list) and verified it in the /etc/system/local/outputs.conf file. It is using defaultGroup = default-autolb-group. I then added a monitor on /var/log. The commands:

./splunk add forward-server <host name or ip address>:<listening port>
./splunk add monitor /var/log
./splunk restart

This is where I'm confused. I created an index on the Enterprise server named 'default-autolb-group' to capture the data, but this index does not get populated. However, if I go to Apps > Search & Reporting and filter by index=_internal, I see some info from the server where my Universal Forwarder is installed. The latest message was after a restart and lists the cores, RAM, etc. So data is coming through from the server, but it's not going where I expect. What am I missing?
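For context, a sketch of what the forwarder's outputs.conf typically looks like with these settings (the hostname is a placeholder); note that defaultGroup names a group of receiving indexer servers, not a Splunk index:

```
# outputs.conf on the Universal Forwarder (sketch)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = enterprise-server.example.com:9997
```

Forwarded data lands in whatever index the inputs are configured for (main by default), so searching index=main or index=* on the receiver is a common first check.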
I developed a Splunk app and it's on Splunkbase. I need to make it compatible with Splunk Cloud. What are the criteria for getting the cloud badge on my current app on Splunkbase? Thank you, Marco
I have two regexes and want to view both extractions, like a new field named URLs, with both displayed in the same column. Is there a way to combine them?
I'm trying to create a query that basically says: show me events that contain A, B, C or D, where the latest is A or B. I believe I could do this with a subsearch:

"A" OR "B" randomfield=X [ search ("A" OR "B" OR "C" OR "D") randomfield=X | head 1 ]

I know the first part pulls the right data and the second part pulls the right data; I just can't get them both to return the one result that I want. I also tried this as a transaction:

"A" OR "C" randomfield=X | transaction startswith="A" endswith="C" keepevicted=t | search closed_txn=0 | stats count by randomField

But I realized there is more than just one possible start and one possible end. I just want to make sure that the LAST result from a list of specific events is in a smaller list of specific events. Thanks!
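One way to express "the latest event is A or B" without a subsearch (a sketch; the field name and string values are placeholders taken from the question) is to keep only the most recent event per group and then filter it:

```
("A" OR "B" OR "C" OR "D") randomfield=X
| stats latest(_raw) as last_event by randomfield
| search last_event="*A*" OR last_event="*B*"
```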
Greetings: I'm in search of Cisco sample logs with sourcetype=cisco_wsa_squid to sharpen my SPL. Can anyone point me to a location where such logs can be downloaded?
Can anyone provide me some information on doing a lift and shift from on-prem into Azure? Some services will still be on-prem and some will be in Azure. Would we just match our current on-prem Splunk requirements to Azure resources? What are some best practices for forwarding data from a hybrid environment? Could we send everything to Azure Monitor and then forward it to Splunk, using indexer and search head clusters? The Splunk architecture is running on Linux. We have some heavy and universal forwarders sending data to Splunk. Thank you!
Is there a way to list every deployment client that has ever connected to my deployment server, even once? I've found queries to display connected clients, and obviously this information is located in the GUI under Forwarder Management > Clients. But the list of clients resets when you restart Splunk on the deployment server. This makes it difficult to keep track of which clients are properly phoning home while I'm making changes to configuration files (which require a Splunk restart to take effect). When Splunk is restarted, it repopulates the clients list with the ones that are actively phoning home, but there is no way to tell if a client should be phoning home but isn't, because it was erased from the list by the restart. If there is a way to find a list of every client that has ever connected, so I can more easily know which clients I have in this environment so far, that would help a lot. Any input at all is appreciated. Thanks!
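One sketch sometimes used for this (assuming _internal retention covers the period of interest; the phone-home URI format is an assumption to verify against your own splunkd_access events) is to search the deployment server's indexed access log instead of the UI:

```
index=_internal sourcetype=splunkd_access phonehome
| rex field=uri "phonehome/connection_(?<client_ip>\d+\.\d+\.\d+\.\d+)"
| stats latest(_time) as last_phonehome by client_ip
```

Because this reads indexed logs rather than the deployment server's in-memory client list, it survives restarts for as long as _internal data is retained.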
Hi, sorry, this could be a bit of a newb question, but I've spent a good few hours on this one and haven't managed to work it out. I've got some Linux VMs that are all accessible through port translations in my VM configuration for SSH and Splunk Web. What I'm trying to do is create an indexer cluster with 3 indexing peers, a cluster master, and a search head, so I've been following the instructions in the Splunk docs. I configure the master, then move on to the slaves. What happened was, as soon as I restarted Splunk on those slaves, they would come back up but with no Splunk Web. I just end up with this forever:

Waiting for web server at http://127.0.0.1:8000 to be available...................................

As soon as I change one back to being standalone (not an index slave and no peering) and restart, it comes straight up, no problem. I thought I might have configured it incorrectly, but I've done it through the GUI, the CLI, and by editing the server.conf file, and I get the same result each time. I thought it might be a port conflict, but I couldn't see one; I tried changing ports anyway, with no luck. It doesn't seem to be permissions either, as starting as root has the same outcome. I did see one article here suggesting that you can't run Splunk Web on a cluster member, but the docs seem to say you should be able to jump straight back onto it once it comes back up from the restart. Looking for some advice on this before spending any more time on it. There is nothing at all in the Linux or Splunk logs. Thanks.
Hello, How I would write my Props Configuration (Tme Prefix, Time Format,  LINE/EVENT Breaker...etc) for following HTML data source. A segment of HTML data from source file  provided  below. Any help will be highly appreciated. Thank you so much. <HTML><META HTTP-EQUIV="expires" CONTENT="0"> <HEAD><TITLE></TITLE></HEAD> <STYLE type=text/css> td , th { white-space:nowrap;font-family: sans-serif; font-size: 10px } html,body { height:100% } .qtw100 td,.qthw100 td { padding:0px;} .qtw100 { width:100%; } .qthw100 { width:100%;height:100%; } .spnode,.nspnode { text-align:center;border-style:inset; } .spnode { border-left-width:10px;border-bottom-width:10px; } .hd   { background-color:#FFFFFF;text-align:right; } .hdw { background-color:#FFFFFF;width:1%; } .CD0D0D0 { background-color:#D0D0D0; color:#D0D0D0; } .C00CC00 { background-color:#00CC00; color:#00CC00; } .CCCCC00 { background-color:#CCCC00; color:#CCCC00; } .CFFFFFF { background-color:#FFFFFF; color:#FFFFFF; } .C66FFFF { background-color:#66FFFF; color:#66FFFF; } .CFF0000 { background-color:#FF0000; color:#FF0000; } .CFFFF00 { background-color:#FFFF00; color:#FFFF00; } .C00FF00 { background-color:#00FF00; color:#00FF00; } .CFF00FF { background-color:#FF00FF; color:#FF00FF; } .HFFFFFF { background-color:#FFFFFF; text-align:center; } .H66FFFF { background-color:#66FFFF; text-align:center; } .HFF0000 { background-color:#FF0000; text-align:center; } .HFFFF00 { background-color:#FFFF00; text-align:center; } .H00FF00 { background-color:#00FF00; text-align:center; } .HFF00FF { background-color:#FF00FF; text-align:center; } .condtiming { display: none; position: absolute; width: 100% } .cpu_us { background-color:#00FF00;color:#00FF00;font-size:1px; } .cpu_ss { background-color:#FF0000;color:#FF0000;font-size:1px; } .cell_1px { background-color:#FFFFFF;font-size:1px; } .a_html { background-color:#FFFFFF;color:#FFFFFF;border:1px solid #FFFFFF; } </STYLE> <SCRIPT type="text/javascript" language="JavaScript"><!-- function 
HideDIV(d) { document.getElementById(d).style.display = "none"; } function ShowDIV(d) { document.getElementById(d).style.display = "block"; } //--></SCRIPT> <BODY LINK=BLACK VLINK=BLACK> <B>SAP </B>&reg;<B> IQ </B>Query Plan<BR> <B>Query: </B><BR> <B>Version: </B>16.1.040.1549/14760/P/SP04.08/Sun_Sparc/OS 5.11/64bit/2020-11-24 01:09:36 <P ALIGN=LEFT><B>Query Tree</B> <TABLE class="qtw100" BORDER=0 CELLSPACING=0 ALIGN=CENTER> <TR><TD ALIGN=CENTER COLSPAN=3><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>3,677,556,487,906 rows (est.)</TD></TR></TABLE></TD></TR> <TR VALIGN=TOP>   <TD COLSPAN=3 ALIGN=CENTER>    <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#CCAACC class="nspnode"><A NAME=TREE07><A HREF=#07>#07</A> Root of an UPDATE</TD></TR></TABLE>   </TD> </TR> <TR><TD ALIGN=CENTER COLSPAN=3><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>3,677,556,487,906 rows (est.)</TD></TR></TABLE></TD></TR> <TR VALIGN=TOP>   <TD COLSPAN=3 ALIGN=CENTER>    <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#AAFFFF class="nspnode"><A NAME=TREE40><A HREF=#40>#40</A> Parallel Combiner (ordered)</TD></TR></TABLE>   </TD> </TR> <TR><TD ALIGN=CENTER COLSPAN=3><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>3,677,556,487,906 rows (est.)</TD></TR></TABLE></TD></TR> <TR VALIGN=TOP>   <TD COLSPAN=3 ALIGN=CENTER>    <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#CCFFFF class="nspnode"><A NAME=TREE135><A HREF=#135>#135</A> Order By</TD></TR></TABLE>   </TD> </TR> <TR><TD ALIGN=CENTER COLSPAN=3><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>|||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>3,677,556,487,906 rows (est.)</TD></TR></TABLE></TD></TR> <TR 
VALIGN=TOP>   <TD COLSPAN=3 ALIGN=CENTER>    <TABLE BORDER CELLSPACING=0 WIDTH=100%><TR><TD BGCOLOR=#CCCCAA class="nspnode"><A NAME=TREE03><A HREF=#03>#03</A> Join (Sort-Merge)</TD></TR></TABLE>   </TD> </TR> <TR VALIGN=TOP>   <TD ALIGN=CENTER>    <TABLE class="qtw100" BORDER=0 CELLSPACING=0 ALIGN=CENTER>     <TR><TD ALIGN=CENTER COLSPAN=1><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>247,522,712 rows (est.)</TD></TR></TABLE></TD></TR>     <TR VALIGN=TOP>      <TD COLSPAN=1 ALIGN=CENTER>       <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#CCFFFF class="nspnode"><A NAME=TREE168><A HREF=#168>#168</A> Order By</TD></TR></TABLE>      </TD>     </TR>     <TR><TD ALIGN=CENTER COLSPAN=1><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>247,522,712 rows (est.)</TD></TR></TABLE></TD></TR>     <TR VALIGN=TOP>      <TD COLSPAN=1 ALIGN=CENTER>       <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#FFCCFF class="nspnode"><A NAME=TREE01><A HREF=#01>#01</A> Leaf &lt;cdwsa.IRDBM_F1095B_17 AS a&gt;</TD></TR></TABLE>      </TD>     </TR>    </TABLE>   </TD>   <TD>&nbsp;&nbsp;</TD>   <TD ALIGN=CENTER>    <TABLE class="qtw100" BORDER=0 CELLSPACING=0 ALIGN=CENTER>     <TR><TD ALIGN=CENTER COLSPAN=1><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>193,759,886 rows (est.)</TD></TR></TABLE></TD></TR>     <TR VALIGN=TOP>      <TD COLSPAN=1 ALIGN=CENTER>       <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#CCFFFF class="nspnode"><A NAME=TREE201><A HREF=#201>#201</A> Order By</TD></TR></TABLE>      </TD>     </TR>     <TR><TD ALIGN=CENTER COLSPAN=1><TABLE class="qthw100" BORDER=0 CELLSPACING=0><TR><TD WIDTH=50%></TD><TD 
BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD BGCOLOR=BLACK>||||</TD><TD>&nbsp;</TD><TD WIDTH=50%>193,759,886 rows (est.)</TD></TR></TABLE></TD></TR>     <TR VALIGN=TOP>      <TD COLSPAN=1 ALIGN=CENTER>       <TABLE BORDER CELLSPACING=0><TR><TD BGCOLOR=#FFCCFF class="nspnode"><A NAME=TREE02><A HREF=#02>#02</A> Leaf &lt;brlpb.temp_CVR_MONTH_B AS b&gt;</TD></TR></TABLE>      </TD>     </TR>    </TABLE>   </TD> </TR> </TABLE> <P ALIGN=LEFT><B>Query Text</B> <TABLE BORDER=1 ALIGN=CENTER CELLPADDING=2 CELLSPACING=0 WIDTH=100%><TR><TD><PRE> <FONT SIZE=-1>update &quot;cdwsa&quot;.&quot;IRDBM_F1095B_17&quot; as &quot;a&quot;   set &quot;a&quot;.&quot;DEP4_COV_IND_M1&quot; = &quot;b&quot;.&quot;COVERED_IND&quot; from   &quot;cdwsa&quot;.&quot;IRDBM_F1095B_17&quot; as &quot;a&quot;,&quot;temp_CVR_MONTH_B&quot; as &quot;b&quot;   where(&quot;a&quot;.&quot;INFO_RETURN_OTH_ENTITY_ID4&quot; = &quot;b&quot;.&quot;INFO_RETURN_OTH_ENTITY_ID&quot;)</FONT></PRE></TD></TR></TABLE><P> <P ALIGN=LEFT><B>Query Detail</B> <TABLE BORDER=0 ALIGN=CENTER CELLSPACING=2 CELLPADDING=2> <TR><TD> <TABLE BGCOLOR=#CCAACC BORDER=1 CELLSPACING=0> <TR><TH COLSPAN=2><A NAME=07><A HREF=#TREE07>#07 Root of an UPDATE</A></TH></TR> <TR><TD><B>Child Node 1</B></TD><TD><A HREF=#40>#40</A></TD></TR> <TR><TD><B>Estimated Result Rows</B></TD><TD>3,677,556,487,906</TD></TR> <TR><TD><B>User Name</B></TD><TD>brlpb   (SA connHandle: 12123  SA connID: 35)</TD></TR> <TR><TD><B>Est. 
Temp Space Used (Mb)</B></TD><TD>56140712.3</TD></TR> <TR><TD><B>Requested attributes</B></TD><TD>No Scroll Hold Chained </TD></TR> <TR><TD><B>Effective Number of Users</B></TD><TD>1</TD></TR> <TR><TD><B>Number of CPUs</B></TD><TD>32</TD></TR> <TR><TD><B>Executed on</B></TD><TD>SunOS/mtb1120plcdwstg/5.11/11.3/sun4v</TD></TR> <TR><TD><B>IQ Main Cache Size (Mb)</B></TD><TD>275000</TD></TR> <TR><TD><B>IQ Temp Cache Size (Mb)</B></TD><TD>250000</TD></TR> <TR><TD><B>IQ Large Memory Size (Mb)</B></TD><TD>275000</TD></TR> <TR><TD><B>Threads used for executing local invariant predicates</B></TD><TD>1</TD></TR> <TR><TD><B>Number of CPUs (actual)</B></TD><TD>256</TD></TR> <TR><TD><B>Option CREATE_HG_WITH_EXACT_DISTINCTS</B></TD><TD>OFF</TD></TR> <TR><TD><B>Option CORE_Options125</B></TD><TD>4096  (default: 0)</TD></TR> <TR><TD><B>Option Query_Plan_As_HTML</B></TD><TD>ON</TD></TR> <TR><TD><B>Option Max_Hash_Rows</B></TD><TD>2500000  (default: 30000000)</TD></TR> <TR><TD><B>Option Max_Temp_Space_Per_Connection</B></TD><TD>3000000  (default: 0)</TD></TR> <TR><TD><B>Option Infer_Subquery_Predicates</B></TD><TD>OFF</TD></TR> <TR><TD><B>Option Prefetch_Sort_Percent</B></TD><TD>50  (default: 20)</TD></TR> <TR><TD><B>Option Ase_Binary_Display</B></TD><TD>ON</TD></TR> <TR><TD><B>Option String_rtruncation</B></TD><TD>OFF</TD></TR> <TR><TD><B>Output Vector</B></TD><TD>2 entries (9 data bytes)</TD></TR> <TR><TD><B>Output 1</B></TD><TD>a._RowId</TD></TR> <TR><TD><B>Output 1     Data Type</B></TD><TD>unsigned bigint (20, 0)</TD></TR> <TR><TD><B>Output 1     Base Distincts</B></TD><TD>247,522,712</TD></TR> <TR><TD><B>Output 1     Note</B></TD><TD>Declared Primary Key</TD></TR> <TR><TD><B>Output 2</B></TD><TD>b.COVERED_IND</TD></TR> <TR><TD><B>Output 2     Data Type</B></TD><TD>varchar(1)</TD></TR> <TR><TD><B>Output 2     Base Distincts</B></TD><TD>3</TD></TR> </TABLE> </TD></TR> <TR><TD> <TABLE BGCOLOR=#AAFFFF BORDER=1 CELLSPACING=0> <TR><TH COLSPAN=2><A NAME=40><A HREF=#TREE40>#40 
Parallel Combiner (ordered)</A></TH></TR> <TR><TD><B>Parent Node</B></TD><TD><A HREF=#07>#07</A></TD></TR> <TR><TD><B>Child Node 1</B></TD><TD><A HREF=#135>#135</A></TD></TR> <TR><TD><B>Estimated Result Rows</B></TD><TD>3,677,556,487,906</TD></TR> <TR><TD><B>Max. Possible Parallel Arms</B></TD><TD>32</TD></TR> <TR><TD><B>Optimization Note</B></TD><TD>Input Ordering Preserved</TD></TR> <TR><TD><B>Output Vector</B></TD><TD>2 entries (9 data bytes)</TD></TR> <TR><TD><B>Output 1</B></TD><TD>a._RowId</TD></TR> <TR><TD><B>Output 1     Data Type</B></TD><TD>unsigned bigint (20, 0)</TD></TR> <TR><TD><B>Output 1     Base Distincts</B></TD><TD>247,522,712</TD></TR> <TR><TD><B>Output 1     Note</B></TD><TD>Declared Primary Key</TD></TR> <TR><TD><B>Output 2</B></TD><TD>b.COVERED_IND</TD></TR> <TR><TD><B>Output 2     Data Type</B></TD><TD>varchar(1)</TD></TR> <TR><TD><B>Output 2     Base Distincts</B></TD><TD>3</TD></TR> </TABLE> </TD></TR> <TR><TD> <TABLE BGCOLOR=#CCFFFF BORDER=1 CELLSPACING=0> <TR><TH COLSPAN=2><A NAME=135><A HREF=#TREE135>#135 Order By</A></TH></TR> <TR><TD><B>Parent Node</B></TD><TD><A HREF=#40>#40</A></TD></TR> <TR><TD><B>Child Node 1</B></TD><TD><A HREF=#03>#03</A></TD></TR> <TR><TD><B>Estimated Result Rows</B></TD><TD>3,677,556,487,906</TD></TR> <TR><TD><B>Optimization Note</B></TD><TD>Parallel sort load</TD></TR> <TR><TD><B>Optimization Note</B></TD><TD>Parallel sort retrieval</TD></TR> <TR><TD><B>Max. Possible Parallel Arms</B></TD><TD>32</TD></TR> <TR><TD><B>Metadata Column Count</B></TD><TD>1</TD></TR> <TR><TD><B>Ordering Expression 1</B></TD><TD>a._RowId`(1)</TD></TR> <TR><TD><B>Output Vector</B></TD><TD>2 entries (9 data bytes)</TD></TR> <TR><TD><B>Output 1</B></TD><TD>a._RowId`(1)</TD></TR> <TR><TD><B>Output 2</B></TD><TD>b.COVERED_IND`(1)</TD></TR> </TABLE> </TD></TR>
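For a single-event-per-file HTML source like the query plan above, a props.conf sketch could look like this (assumptions: the sourcetype name is made up, and the only reliable timestamp is the date at the end of the Version line, e.g. ".../64bit/2020-11-24 01:09:36"):

```
# props.conf (sketch -- sourcetype name and values are assumptions)
[sap_iq_query_plan]
# Treat the whole plan as one event; break only before the <HTML> tag
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<HTML>
# Timestamp follows the ".../64bit/" token on the Version line
TIME_PREFIX = \d+bit/
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 512
TRUNCATE = 0
```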
Hello, I have issues with integrating (claiming) a SaaS AppDynamics instance in Cisco Intersight: the process fails with the following message: "Discovery failed: CRITICAL: com.vmturbo.mediation.appdynamics.exception.RefreshTokenCredentialException: Credentials are invalid or do not correspond to AppDynamics Controller API requirements." My instance is in Pro Trial mode right now; could that be the reason for the error message? Thank you.
I'm trying to send an e-mail from my Splunk search alert (I am using Splunk Enterprise), but I'm getting an error message "command="sendemail", 'rootCAPath' while sending mail to: MySuperCoolEmail101@gmail.com", and when I try a solution that involves going to Settings to add a role, I don't see that option listed at all. My search query is something like (reference https://community.splunk.com/t5/Reporting/How-to-get-Splunk-sendemail-command-to-send-multiple-emails/m-p/126815):

index=rtm source="/mypath/app.log" SomeRandomTextHereForTesting
| sendemail to="MySuperCoolEmail101@gmail.com" format=raw subject=myresults server=mail.splunk.com sendresults=true

I found a post which suggests adding a "list_settings" role by going to Settings > Access Controls > Roles, but I do not seem to have that option (reference https://community.splunk.com/t5/Reporting/splunk-dashboard-cant-sent-email/m-p/489388). Am I doing something wrong? Is this because I'm on Splunk Enterprise? Am I using the correct mail server? How do I add roles?
Hello all, I am trying to setup the Microsoft 365 Defender Add-on for Splunk (https://splunkbase.splunk.com/app/4959/) to collect events from gcc.securitycenter.microsoft.us but I am not really seeing an option to change the endpoint.  Can this be configured to hit https://api-gcc.securitycenter.microsoft.us?        
How do I search (with any SPL) for dashboards that are not working (either built-in or created by users) or that throw errors while running, in Splunk Core or ES? Thank you very much for any help.
I have a list of hundreds of string values that need to be extracted from a field. The problem is that the values to extract contain special characters, i.e. (\, $, \\, ^, . . .). Is there an easy way I'm missing to extract the literal characters? What I have tried (example list):

| makeresults
| eval raw_field_example = "$money$ \"?question?\" \"help$..\" \"random\" \"text\""
| makemv raw_field_example delim=" "
| rex field=raw_field_example "(?<literal_items_I_want>\"money\"|$money$|\"text\"|\"help$..\")"

The results for literal_items_I_want are only: money, text. I tried putting the @ symbol before each double quote, but that didn't work. Thanks for the help.
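In regex, $, ., ^ and \ are metacharacters, so matching them literally requires backslash-escaping. A sketch using sample values from the question:

```
| makeresults
| eval raw_field_example = "$money$ help$.. random text"
| makemv raw_field_example delim=" "
| rex field=raw_field_example max_match=0 "(?<literal_items_I_want>\$money\$|help\$\.\.|text)"
```

Since SPL quoted strings also treat backslashes specially, a literal backslash in the data typically needs to be written as \\\\ inside the quoted rex pattern.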
Hi team, I have data as below:

2021-08-31 00:05:28|Test|Event|[c.f.d.aop.sql.database ] 2ms :testing8
2021-08-31 00:05:30|Test|Event|[c.f.d.aop.sql.database ] 1ms :testing1
2021-08-31 00:05:32|Test|Event|[c.f.d.aop.sql.database ] 12ms :testing3
2021-08-31 00:05:35|Test|Event|[c.f.d.aop.sql.database ] 20ms :testing5
2021-08-31 00:05:36|Test|Event|[c.f.d.aop.sql.database ] 102ms :testing9

I want to extract the "ms" values and create a timechart based on those values. Can anyone assist?
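A sketch for the extraction and chart, based on the sample events above (the field name duration_ms is made up):

```
| rex "\]\s+(?<duration_ms>\d+)ms"
| timechart span=1m avg(duration_ms) as avg_ms
```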
Hi, my client has Splunk Cloud and wants to know in which country his logs are stored: https://my_client_name.splunkcloud.com/. If I ping that URL it resolves to an IP, and if I do a whois I can learn which country the IP is from. On that basis, can I tell him that the logs are stored in that region or country?
How do I make sure the ES KV stores are working and mapped properly, so I can use them and avoid such errors? I'd appreciate some help, please.