At search time, you could use spath to navigate past the "event:" part of the log:

<yoursearch> | spath path=event output=_raw

However, you may want a solution that does not require spath on every search. You can configure your indexing tier to remove the "event:" part of the log, so that it directly shows you the useful fields without needing to be expanded. On the indexing tier, make a props.conf file in an app, e.g. /opt/splunk/etc/apps/yourappname/local/props.conf, and add this stanza:

[cisco:amp:event]
SEDCMD-RemoveEventKey = s/{"event":\s*//
SEDCMD-RemoveLastBracket = s/}$//

(and more stanzas for other sourcetypes you would like this change to apply to)
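If you want to verify the sed expressions before deploying them, a minimal sketch (the index name is a placeholder) is to apply the same substitutions at search time with rex in sed mode and check that the result then parses cleanly with spath:

index=your_index sourcetype=cisco:amp:event
| rex mode=sed "s/{\"event\":\s*//"
| rex mode=sed "s/}$//"
| spath

Note that SEDCMD only affects data indexed after the change; events already indexed keep their original _raw.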
I have been working on decoding a base64 encoded command using the DECRYPT2 app. I have successfully decoded the string, but I am having difficulty excluding or searching on the decoded field, and running stats on the decoded field returns a "p" value in the results.

Example of | search NOT:

Example of stats returning "p":

| rex field="process" ".*-(e|E)(n|N)[codemanCODEMAN]{0,12}\ (?<process_enc>[A-Za-z\d+/=]*)?"
| decrypt field=process_enc b64 emit('process_decoded')
| stats count by process_decoded

Could someone please provide guidance on the correct syntax to exclude or search the decoded field (using search NOT or using a lookup), and help clarify the "p" value from the stats command? DECRYPT2
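For reference, a minimal sketch of excluding specific decoded values after the pipeline above (the literal values here are hypothetical placeholders, not from the original post):

| rex field="process" ".*-(e|E)(n|N)[codemanCODEMAN]{0,12}\ (?<process_enc>[A-Za-z\d+/=]*)?"
| decrypt field=process_enc b64 emit('process_decoded')
| search NOT process_decoded IN ("known-good-command-1", "known-good-command-2")
| stats count by process_decoded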
We have installed and configured the MS Teams app Splunk>VictorOps version 1.1.0. It is successfully posting alerts to channels that are Public or Standard channels in a private team. However, there is no option to select posting alerts to a Shared channel; the channel just does not appear in the list. Is this a known limitation of the app? The integration guide at https://help.victorops.com/knowledge-base/microsoft-teams-integration-guide/ states that "Note that installing into any channel in a team will make Splunk>VictorOps available for all channels in that team." Has the app been set up to allow integration with Shared channels in MS Teams?

References:
https://learn.microsoft.com/en-us/microsoftteams/shared-channels
https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/build-and-test/shared-channels
This worked for me, thanks! You would assume Splunk would hold your hand a bit more rather than making you find it, but I'll take it.
It looks like you are doing everything correctly. Do you have any blocking elements in your environment, like a proxy or firewall? Also, can you find any internal logs that may contain more clues as to why the authentication fails? e.g.

index=_internal authentication failed

(and any events that occur around the events that explicitly say "authentication" or "failed")
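A slightly broader sketch of that internal-log search, assuming default internal logging (the specific field values are assumptions and can be loosened if they match nothing):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) ("authentication" OR "login" OR "failed")
| stats count by component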
Does this combined query produce the desired results?

|mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN(transaction1, transaction2, transaction3) by service transaction
|eval SuccessPerct=round(((Success/Total)*100),2)
|xyseries service transaction Total Success SuccessPerct
|table service "Success: transaction1" "SuccessPerct: transaction1" "SuccessPerct: transaction2" "Total: transaction2" "Success: transaction2"
|join service
    [|mstats sum(error-count) as Error where index=metric-index by service errortype
    |append
        [|search index=app-index sourcetype=appl-logs (TERM(POST) OR TERM(GET) OR TERM(DELETE) OR TERM(PATCH)) OR errorNumber!=0 appls=et
        |lookup app-error.csv code as errorNumber output type as errortype
        |stats count as app.error count by appls errortype
        |rename appls as service error-count as Error]
    |xyseries service errortype Error
    |rename wvv as WVVErrors xxf as nonerrors]
|addtotals "Success: transaction1" WVVErrors nonerrors fieldname="Total: transaction1"
|eval sort_service=case(service="serv1",1,service="serv2",2,service="serv3",3,service="serv4",4,service="serv5",5,service="serv6",6,service="serv7",7,service="serv8",8,service="serv9",9,service="serv10",10)
|sort + sort_service
|table service "Success: transaction1" "SuccessPerct: transaction2" WVVErrors nonerrors
|fillnull value=0
| append
    [|mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype]
| stats values(*) as * by service
Hi, thank you for the prompt response. Trellis layout can be one of the options. However, for example, if I click on DB, it should redirect to a separate visualization where we can view the in-depth details only for DB, and not for MEMBERDASHBOARD or TASKEDIT. Is this possible? Please let me know.
Was there ever a resolution to this error? I am seeing that the directory "unknown" is being created, not the actual microservice name. It seems like a permissions issue; any thoughts?
I will preface by saying I am very new to using Splunk. We recently did a rebuild of our environment, and I noticed that one of our log sources does not return formatted logs the same way our other log sources do. Whenever I run a query for AMP (Cisco Secure Endpoint), I have to click 'Show as raw text' to see any data, which does not seem right to me. I have also been trying to extract fields using rex and it just does not seem to be working; I'm not sure if it has something to do with how the logs are displayed when I run a query. Could someone point me in the right direction?
Thank you so much, you saved me a lot of time. I thought it was an agent incompatibility issue with the AWS machine, but you helped solve the mystery; it seems to be how it is programmed by default.
I have a search for which I need to tune out a large number of values (about 25) in a proctitle command field. Currently using:

NOT proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*")

I'm worried about performance on the search head and am looking for ways to lower the CPU and memory burden. I have two possible solutions:

1) Create a data model and place this search as a constraint.
2) Tag events on ingest with proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*") and use this tag as a constraint in the data model.

I've played with #1. Is #2 possible, and is there a more efficient way to do this? Thanks in advance.
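As a point of reference for option #2: tags in Splunk are normally applied at search time via an eventtype rather than at ingest, so a sketch of that configuration (the stanza and tag names here are hypothetical) would look something like the following, which could then back a data model constraint:

eventtypes.conf:
[noisy_proctitle]
search = proctitle IN ("*<proc1>*", "*<proc2>*", "*<proc25>*")

tags.conf:
[eventtype=noisy_proctitle]
noisy_proc = enabled

The data model constraint could then use NOT tag=noisy_proc. This is still evaluated at search time, so whether it actually reduces cost depends largely on whether the data model is accelerated.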
Would trellis layout work for you? Note that this is limited to 20 per page in Classic / SimpleXML dashboards, but Studio allows you to set the number per page.
I know this thread is a few years old, but I hope you are still active. Splunk is not pulling the OID off of smartcards to handle the full login itself, so we set up Apache and I made the remoteUser and RequestHeader configurations you described. When Splunk receives the header, nothing happens, though; it logs an entry that ProxySSO is not configured. Have you seen this issue, and do you know how to get past it so we can still use LDAP authentication in Splunk while passing the user name from the proxy via the method you described?
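For context, a minimal sketch of the reverse-proxy SSO settings on the Splunk side, assuming the classic trusted-IP mechanism and a proxy on the same host (the IP address, header name, and SSOMode value are assumptions to adapt):

web.conf:
[settings]
SSOMode = permissive
trustedIP = 127.0.0.1
remoteUser = REMOTE_USER
tools.proxy.on = true

server.conf:
[general]
trustedIP = 127.0.0.1

Splunk only honours the remote-user header when the request originates from a trusted IP, so a mismatch there is a common reason the header is silently ignored.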
I have a query that counts totals for each day for the past 7 days and produces these results: 2, 0, 2, 0, 0, 0, 0. No matter what I do, the single value visualization with timechart and trendlines enabled ignores the trailing zeros and displays a 2, with a trendline of increasing 2. It should display a zero, with a zero trend line representing the last two segments (both zero). Before the main query (as recommended) I have used | makeresults earliest="-7d@d" count=0 to ensure the days with zero count are included. I have tried the suggested appendpipe option:

| appendpipe [| stats count | where count=0 | addinfo | eval _time=info_min_time | table _time count]

and the appendpipe with max(count) option:

| appendpipe [| stats count | where count=0 | addinfo | eval time=info_min_time." ".info_max_time | table time count | makemv time | mvexpand time | rename time as _time | timechart span=1d max(count) as count]

Neither creates the correct timechart. From the dashboard in the Edit UI mode, if I click on the query magnifying glass and open in a new tab, the results do NOT display the trailing zeros. If I copy and paste the query into a search bar with the time picker set to All Time, I get the correct values: 2, 0, 2, 0, 0, 0, 0. Is there an option setting I may have wrong? How do I fix this?
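One common way to force the zero days to appear, sketched here under the assumption that the panel's time range rather than the query is driving the buckets (the index and sourcetype are placeholders), is to pin the range in the search itself and let timechart create the empty days:

index=your_index sourcetype=your_sourcetype earliest=-7d@d latest=@d
| timechart span=1d count
| fillnull value=0 count

If the single value visualization still ignores trailing zeros, it is worth checking that the panel's own time picker matches the -7d@d window, since inline earliest/latest in the query normally take precedence over the picker.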
Hi @Gregory.Burkhead, Have you reported this one or the others to AppDynamics Support? How do I submit a Support ticket? An FAQ
Hi, here is the query and the results. Visualization panels should get created/deleted automatically depending on the rows under the Page column.

index="*" appID="*" environment=* tags="*" stepName="*" status=FAILED
| rex field=stepName "^(?<Page>[^\:]+)"
| rex field=stepName "^\'(?<Page>[^\'\:]+)"
| rex field=stepName "\:(?P<action>.*)"
| eval Page=lower(Page)
| stats count(scenario) as "Number of Scenarios" by Page
| table Page, "Number of Scenarios"

I created the single value visualization panels manually based on the rows, but if the number of rows decreases dynamically, I see N/A in most of the visualization panels. So auto-scaling of the visualization panels is needed in this scenario.
I'm seeing this error from the _internal index in the web_service.log and the python.log: "startup:116 - Unable to read in product version information; isSessionKeyDefined=True error=[HTTP 401] Client is not authenticated" Does anyone have more information on this error?
Hi, I would like to have the Citrix Cloud add-on installed into Splunk Cloud. How can I achieve this?
@Mohd_Harahsheh9 Please find below the Tenable and Splunk integration documents:
Tenable and Splunk Integration Guide
Troubleshooting (tenable.com)
Tenable Data in Splunk Dashboard
--- If this reply helps you, Karma would be appreciated.
@SCruz Follow the document below, which covers how to install an add-on or app in Splunk Cloud:
Install an add-on in Splunk Cloud Platform - Splunk Documentation
--- If this reply helps you, Karma would be appreciated.