Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi, I have set up the object and event input configurations in the Salesforce TA. I can see the object logs in Splunk Cloud, but not the event logs. Any direction on triaging the issue? Appropriate permissions have been granted to the Salesforce user.
You might want to configure Splunk to start at boot time:
/opt/splunk/bin/splunk enable boot-start
ref: https://docs.splunk.com/Documentation/Splunk/latest/Admin/ConfigureSplunktostartatboottime
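If Splunk runs under a dedicated service account rather than root, the command also takes a -user flag; a minimal sketch, assuming the account is named "splunk" (check the linked docs for the systemd variants):
/opt/splunk/bin/splunk enable boot-start -user splunk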
Thanks. I hadn't thought of that. Since I posted the question, NetSkope came back with a solution. I was sent this:
conf_file_stanzas = conf_file_object.get_all()
replace the above line with the following:
conf_file_stanzas = conf_file_object.get_all(only_current_app=True)
With that, the issue was resolved. The code was trying to get information from another TA.
Assuming you are on a Linux machine, you could try piping the session_key value to that first command:
echo "sessionkeyhere" | splunk cmd python -m pdb netskope_email_notification.py
(Note that if you enter the key explicitly, it may be saved in your command history, which may be undesirable. You can also read the key from a source using the "cat" command.) I couldn't tell you about the session_key... perhaps the Netskope docs could tell you where to get one. It could be a session with an email provider if this Python script is intended to send email.
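For example, a sketch of the cat variant (the file path here is hypothetical):
cat /path/to/session_key.txt | splunk cmd python -m pdb netskope_email_notification.py
This keeps the key itself out of your shell history.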
I don't fully understand what you mean... would it be possible to include screenshots demonstrating the timechart you would like (using the All Time search) versus what you get? Also, a full query (without private information) would be very helpful. E.g. something like this?
Indeed, "latest" should be on 9.2.1, but it seems to be on 9.0.9. Perhaps we can ping the resolver of that post, @amayor_splunk, and humbly ask for assistance.
At search time, you could use spath to navigate past the "event:" part of the log:
<yoursearch> | spath path=event output=_raw
However, you may want a solution that does not require spath on every search. You can configure your indexing tier to remove the "event:" part of the log, so that it directly shows you the useful fields without needing to be expanded. On the indexing tier, make a props.conf file in an app, e.g. /opt/splunk/etc/apps/yourappname/local/props.conf, with this stanza:
[cisco:amp:event]
SEDCMD-RemoveEventKey = s/{"event":\s*//
SEDCMD-RemoveLastBracket = s/}$//
(and more stanzas for other sourcetypes you would like this change to apply to)
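To see the search-time effect without touching real data, here is a minimal sketch you can paste into a search bar (the JSON field names are hypothetical):
| makeresults
| eval _raw="{\"event\": {\"timestamp\": \"2024-01-01T00:00:00Z\", \"severity\": \"high\"}}"
| spath path=event output=_raw
| spath
The final spath extracts timestamp and severity directly, which is what the SEDCMD approach gives you at index time.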
I have been working on decoding a Base64-encoded command using the DECRYPT2 app. I have successfully decoded the string, but I am having difficulty excluding or searching on the decoded field, and also running stats on it, which gives a "p" value as a result.
Example of | search NOT:
Example of stats that resulted in "p":
| rex field="process" ".*-(e|E)(n|N)[codemanCODEMAN]{0,12}\ (?<process_enc>[A-Za-z\d+/=]*)?"
| decrypt field=process_enc b64 emit('process_decoded')
| stats count by process_decoded
Could someone please provide guidance on the correct syntax to exclude or search the decoded field using search NOT or a lookup, and help clarify the "p" value from the stats command? DECRYPT2
We have installed and configured the MS Teams app Splunk>VictorOps version 1.1.0. It is successfully posting alerts to channels that are public, or standard channels in a private team. However, there is no option to select posting alerts to a shared channel; the channel just does not appear in the list. Is this a known limitation of the app? The integration guide at https://help.victorops.com/knowledge-base/microsoft-teams-integration-guide/ states: "Note that installing into any channel in a team will make Splunk>VictorOps available for all channels in that team."
Has the app been set up to allow integration with shared channels in MS Teams?
References:
https://learn.microsoft.com/en-us/microsoftteams/shared-channels
https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/build-and-test/shared-channels
This worked for me! Thanks. You would assume Splunk would hold your hand a bit more rather than making you find it, but I'll take it.
It looks like you are doing everything correctly. Do you have any blocking elements in your environment, like a proxy or firewall? Also, do you find any internal logs that may contain more clues as to why the authentication fails? e.g.
index=_internal authentication failed
(and any events that occur around the events that explicitly say "authentication" or "failed")
Does this combined query produce the desired results?
|mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN(transaction1, transaction2, transaction3) by service transaction
|eval SuccessPerct=round(((Success/Total)*100),2)
|xyseries service transaction Total Success SuccessPerct
|table service "Success: transaction1" "SuccessPerct: transaction1" "SuccessPerct: transaction2" "Total: transaction2" "Success: transaction2"
|join service
    [|mstats sum(error-count) as Error where index=metric-index by service errortype
    |append
        [|search index=app-index sourcetype=appl-logs (TERM(POST) OR TERM(GET) OR TERM(DELETE) OR TERM(PATCH)) OR errorNumber!=0 appls=et
        |lookup app-error.csv code as errorNumber output type as errortype
        |stats count as app.error count by appls errortype
        |rename appls as service error-count as Error]
    |xyseries service errortype Error
    |rename wvv as WVVErrors xxf as nonerrors]
|addtotals "Success: transaction1" WVVErrors nonerrors fieldname="Total: transaction1"
|eval sort_service=case(service="serv1",1,service="serv2",2,service="serv3",3,service="serv4",4,service="serv5",5,service="serv6",6,service="serv7",7,service="serv8",8,service="serv9",9,service="serv10",10)
|sort + sort_service
|table service "Success: transaction1" "SuccessPerct: transaction2" WVVErrors nonerrors
|fillnull value=0
| append
    [|mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype]
| stats values(*) as * by service
Hi, thank you for the prompt response. The trellis layout can be one of the options. However, if I click on DB, for example, it should redirect to a separate visualization where we can view the in-depth details only for DB, and not for MEMBERDASHBOARD or TASKEDIT. If this is possible, please let me know.
Was there ever a resolution to this error? I am seeing that the directory "unknown" is being created, not the actual microservice name. Seems like a permissions issue; any thoughts?
I will preface by saying I am very new to using Splunk. We recently did a rebuild of our environment, and I noticed that one of our log sources does not return formatted logs the same way our other log sources do. Whenever I do a query for AMP (Cisco Secure Endpoint), I have to click 'Show as raw text' to see any data, which does not seem right to me. I have also been trying to extract fields using rex, and it just does not seem to be working; I'm not sure if it has something to do with how the logs are displayed when I run a query. Could someone point me in the right direction?
Thank you so much, you saved me a lot of time. I thought it was an agent incompatibility issue with the AWS machine, but you helped solve the mystery; it seems to be a programming-by-default issue.
I have a search for which I need to tune out a large number of values (about 25) in a proctitle command field. Currently I am using:
NOT proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*")
I'm worried about performance on the search head and am looking for ways to lower the CPU and memory burden. I have two possible solutions:
1) Create a data model and place this search as a constraint.
2) Tag events on ingest with proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*") and use this tag as a constraint in the data model.
I've played with #1. Is #2 possible, and is there a more efficient way to do this? Thanks in advance.
Would trellis layout work for you? Note that this is limited to 20 per page in Classic / SimpleXML dashboards, but Studio allows you to set the number per page.
I know this thread is a few years old, but I hope you are still active. Splunk is not pulling the OID off of smartcards to handle the full login itself, so we set up Apache, and I made the remoteUser and RequestHeader configurations you described. When Splunk receives the header, though, nothing happens; it logs an entry that ProxySSO is not configured. Have you seen this issue, and do you know how to get past it so we can still use LDAP authentication in Splunk while passing the user name from the proxy via your described method?
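For anyone hitting the same message: "ProxySSO is not configured" usually points at the authType setting. A minimal sketch of the relevant stanzas, based on my reading of the Splunk proxy SSO docs (the trusted IP and header name here are assumptions; verify against the docs for your version):
# server.conf
[authentication]
authType = ProxySSO

# web.conf
[settings]
SSOMode = strict
trustedIP = 10.0.0.1
remoteUser = REMOTE_USER
Note that ProxySSO is its own authType, so whether it can be combined with LDAP authentication is worth confirming in the docs before going further.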
I have a query that counts totals for each day for the past 7 days and produces these results: 2, 0, 2, 0, 0, 0, 0. No matter what I do, the single-value visualization with timechart and trendlines enabled ignores the trailing zeros and displays a 2, with a trendline increasing toward 2. It should display a zero, with a zero trendline representing the last two segments (both zero). Before the main query (as recommended) I have used
| makeresults earliest="-7d@d" count=0
to ensure the days with zero count are included. I have tried the suggested appendpipe option:
| appendpipe [| stats count | where count=0 | addinfo | eval _time=info_min_time | table _time count]
and the appendpipe with max(count) option:
| appendpipe [| stats count | where count=0 | addinfo | eval time=info_min_time." ".info_max_time | table time count | makemv time | mvexpand time | rename time as _time | timechart span=1d max(count) as count]
Neither creates the correct timechart. From the dashboard in Edit UI mode, if I click on the query magnifying glass and open it in a new tab, the results do NOT display the trailing zeros. If I copy and paste the query into a search bar with the time picker set to All Time, I get the correct values: 2, 0, 2, 0, 0, 0, 0. Is there an option setting I may have wrong? How do I fix this?
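In case it helps a future reader: the symptom (correct results under All Time, wrong in the panel) suggests the panel's time range ends before the empty days, so timechart never creates those buckets. A minimal sketch of one way to pin the range inside the search itself (the index name and span are assumptions):
index=your_index earliest=-7d@d latest=now
| timechart span=1d count
| fillnull value=0 count
With earliest/latest fixed in the SPL, timechart emits a bucket for every day in the window regardless of the dashboard's time picker.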