All Topics

However, we are trying to achieve ping status in a dashboard. Also, the ping command app is not supported in Splunk, and we are not allowed to use apps like Network Toolkit. How can we ping a server without using any apps?

Hi User,

Thanks for the reply. Below is the raw text that has been received in the Splunk user interface:

{"timestamp": "2023-01-24T08:06:29.621490Z", "level": "INFO", "filename": "splunk_sample_csv.py", "funcName": "main", "lineno": 38, "message": "Dataframe row : {\"_c0\":{\"0\":null,\"1\":\"266\",\"2\":\"267\",\"3\":\"268\"},\"_c1\":{\"0\":\"Timestamp\",\"1\":\"2023\\/01\\/10 13:31:19\",\"2\":\"2023\\/01\\/10 13:31:19\",\"3\":\"2023\\/01\\/10 13:31:19\"},\"_c2\":{\"0\":\"application\",\"1\":\"DWHEAP\",\"2\":\"DWHEAP\",\"3\":\"DWHEAP\"},\"_c3\":{\"0\":\"ctm\",\"1\":\"LNDEV02\",\"2\":\"LNDEV02\",\"3\":\"LNDEV02\"},\"_c4\":{\"0\":\"cyclic\",\"1\":\"False\",\"2\":\"False\",\"3\":\"False\"},\"_c5\":{\"0\":\"deleted\",\"1\":\"False\",\"2\":\"False\",\"3\":\"False\"},\"_c6\":{\"0\":\"description\",\"1\":\"Job to populate data to RDV for SK SOURCE SALES_EVENT\",\"2\":\"Job to populate data to RDV for SK SOURCE SALES_HIERARCHY\",\"3\":\"Job to populate data to RDV for SK SOURCE SALES_EVENT\"},\"_c7\":{\"0\":\"endTime\",\"1\":null,\"2\":null,\"3\":null},\"_c8\":{\"0\":\"estimatedEndTime\",\"1\":\"[u'20230110144400']\",\"2\":\"[u'20230110123200']\",\"3\":\"[u'20230110123200']\"},\"_c9\":{\"0\":\"estimatedStartTime\",\"1\":\"[u'20230110122700']\",\"2\":\"[u'20230110122700']\",\"3\":\"[u'20230110122700']\"},\"_c10\":{\"0\":\"folder\",\"1\":\"DWHEAP_RDV_SKBACKEND\",\"2\":\"DWHEAP_RDV_SKBACKEND\",\"3\":\"DWHEAP_RDV_SKBACKEND_TEST\"},\"_c11\":{\"0\":\"folderId\",\"1\":\"LNDEV02:\",\"2\":\"LNDEV02:\",\"3\":\"LNDEV02:\"},\"_c12\":{\"0\":\"held\",\"1\":\"False\",\"2\":\"False\",\"3\":\"False\"},\"_c13\":{\"0\":\"host\",\"1\":\"fraasdwhbdd1.de.db.com\",\"2\":\"fraasdwhbdd1.de.db.com\",\"3\":\"fraasdwhbdd1.de.db.com\"},\"_c14\":{\"0\":\"jobId\",\"1\":\"LNDEV02:5jtzl\",\"2\":\"LNDEV02:5jtzi\",\"3\":\"LNDEV02:5jtho\"},\"_c15\":{\"0\":\"logURI\",\"1\":\"https:\\/\\/lnemd.uk.db.com:8443\\/automation-api\\/run\\/job\\/LNDEV02:5jtzl\\/log\",\"2\":\"https:\\/\\/lnemd.uk.db.com:8443\\/automation-api\\/run\\/job\\/LNDEV02:5jtzi\\/log\",\"3\":\"https:\\/\\/lnemd.uk.db.com:8443\\/automation-api\\/run\\/job\\/LNDEV02:5jtho\\/log\"},\"_c16\":{\"0\":\"name\",\"1\":\"SALES_EVENT_RDV\",\"2\":\"SALES_HIERARCHY_RDV\",\"3\":\"SALES_EVENT_RDV\"},\"_c17\":{\"0\":\"numberOfRuns\",\"1\":\"0\",\"2\":\"0\",\"3\":\"0\"},\"_c18\":{\"0\":\"orderDate\",\"1\":\"230106\",\"2\":\"230106\",\"3\":\"230106\"},\"_c19\":{\"0\":\"outputURI\",\"1\":\"Job did not run, it has no output\",\"2\":\"Job did not run, it has no output\",\"3\":\"Job did not run, it has no output\"},\"_c20\":{\"0\":\"startTime\",\"1\":null,\"2\":null,\"3\":null},\"_c21\":{\"0\":\"status\",\"1\":\"Wait Condition\",\"2\":\"Wait Condition\",\"3\":\"Wait Condition\"},\"_c22\":{\"0\":\"subApplication\",\"1\":\"RDV_SKBACKEND\",\"2\":\"RDV_SKBACKEND\",\"3\":\"RDV_SKBACKEND_TEST\"},\"_c23\":{\"0\":\"type\",\"1\":\"Command\",\"2\":\"Command\",\"3\":\"Command\"}} ", "process": 2819, "processName": "MainProcess"}

In the above raw text there are jobIds: \"_c14\":{\"0\":\"jobId\",\"1\":\"LNDEV02:5jtzl\",\"2\":\"LNDEV02:5jtzi\",\"3\":\"LNDEV02:5jtho\"}. We need to extract those jobIds from the raw text and add them as a separate field on the events using SPL in the user interface. Please help me with this.
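A minimal SPL sketch of one possible approach (the index and sourcetype are placeholders, and it assumes the dataframe always has rows keyed 1 through 3, as in the sample): parse the embedded JSON out of the message field with spath, strip the "Dataframe row : " prefix, then read the _c14 values:

index=your_index sourcetype=your_sourcetype
| spath path=message output=df
| eval df=replace(df, "^Dataframe row\s*:\s*", "")
| spath input=df path=_c14.1 output=jobId_1
| spath input=df path=_c14.2 output=jobId_2
| spath input=df path=_c14.3 output=jobId_3
| eval jobId=mvappend(jobId_1, jobId_2, jobId_3)
| table _time jobId

If the number of rows varies per event, a rex with max_match=0 over the _c14 block would be the more general route.
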
Hi there,

We are trying to collect data from the Azure Monitor REST API using REST API Modular Input. The config looks like below:

The problem we are having is that the refresh token doesn't work as we expected. Below is the internal log generated by the REST API app; there is no log coming from the Azure Monitor API. We can see the attempt to execute the HTTP request, but nothing happened.

Any tips would be appreciated, thank you.

I have a metric index with a hierarchical structure (maybe all metric indexes are like this):

SuperCategory.Category1.metric1
                       .metric2
                       .metric3
SuperCategory.Category2.metric1
                       .metric2

There is a many-to-one relationship between categories. I've tried many different combination methods, but my starting point was:

| mstats avg(Vmax.StorageGroup.HostIOs) as IOPs avg(Vmax.StorageGroup.AllocatedCapacity) as SgCapacity avg(Vmax.Array.UsableCapacity) as ArrayCap avg(Vmax.Array.HostIOs) as ArrayIOPS WHERE index=storage_metrics by Array_Name, Loc, sgname span=1d
| eval SgIOPs = round(IOPs, 2), SgCapacity = round(SgCapacity, 2), SgCapPct=round((SgCapacity/ArrayCap)*100, 2), SgIOPct=round((IOPs/ArrayIOPS)*100, 2)
| table sgname Array_Name Loc SgIOPs ArrayIOPS SgIOPct SgCapacity ArrayCap SgCapPct _time

Nothing is returned for any of the Vmax.Array metrics. There are many 'sgname' values for any single 'Array_Name'. As you can probably tell, I'm trying to calculate what percentage of an array's total a given sgname is using. I find myself in this situation quite often and don't really know how to handle it. I appreciate any help anyone can offer.
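For reference, one pattern that can work here (a sketch only; it assumes the Vmax.Array metrics are simply not recorded with an sgname dimension, which would explain why grouping by sgname returns nothing for them): run the two mstats at their native granularities, append them, then copy the array-level totals onto the storage-group rows with eventstats:

| mstats avg(Vmax.StorageGroup.HostIOs) as IOPs avg(Vmax.StorageGroup.AllocatedCapacity) as SgCapacity WHERE index=storage_metrics by Array_Name, Loc, sgname span=1d
| append [| mstats avg(Vmax.Array.UsableCapacity) as ArrayCap avg(Vmax.Array.HostIOs) as ArrayIOPS WHERE index=storage_metrics by Array_Name, Loc span=1d]
| eventstats max(ArrayCap) as ArrayCap max(ArrayIOPS) as ArrayIOPS by Array_Name, Loc, _time
| where isnotnull(sgname)
| eval SgIOPs=round(IOPs, 2), SgCapacity=round(SgCapacity, 2), SgCapPct=round((SgCapacity/ArrayCap)*100, 2), SgIOPct=round((IOPs/ArrayIOPS)*100, 2)
| table sgname Array_Name Loc SgIOPs ArrayIOPS SgIOPct SgCapacity ArrayCap SgCapPct _time
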
Hi,

How can I make this search display the peak by day?

index=* sourcetype=Perfmon:Memory host=*
| timechart span=7d
| stats sparkline(avg(windows_mem_free)) as Trend avg(windows_mem_free) as Average, max(windows_mem_free) as Peak, latest(windows_mem_free) as Current, latest(_time) as "Last Updated" by host
| convert ctime("Last Updated")
| eval Peak=round((Peak)/1000,2)
| eval Current=round((Current)/1000,2)
| eval Average=round((Average)/1000,2)

Thank you,
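One minimal sketch of a per-day peak (binning to a day and taking the max per host; the /1000 scaling is carried over from the original search):

index=* sourcetype=Perfmon:Memory host=*
| bin _time span=1d
| stats max(windows_mem_free) as Peak by _time host
| eval Peak=round(Peak/1000, 2)
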
Have been using the Splunk ODBC driver v3.11 (64-bit) to pull time-series event data from a saved search (report) into Excel for manipulation. It had been working perfectly for over a month, but today Excel suddenly began complaining about missing columns when executing a refresh: "[Expression.Error] The column 'XXXX' of the table wasn't found." Sure enough, the preview in Power Query shows those columns missing. Sometimes I can refresh and they'll reappear, but mostly they don't. The search on the Splunk side runs fine and includes all the columns Excel complains about. I've been troubleshooting by time-bounding the search to see if some bad data has crept in, but am not finding any pattern yet. Is there any way to enable debug logging for the ODBC driver? Nothing obvious jumps out at me in the Windows ODBC configuration GUI. Open to other ideas as well - may have to switch back to a two-step CSV export and subsequent load into Excel - less than ideal. Thanks in advance!

Still working on this. I want to create a single-pane dashboard panel with a trend indicator. The value will display the current count of events over the last 3 hours, and the trendline will display the deviation from the average count over the same 3-hour timespan over the last week.
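A sketch of one way to drive that panel (assuming a Single Value visualization, which displays the last bucket of the series; the index name is a placeholder):

index=your_index earliest=-7d@h latest=now
| timechart span=3h count
| eventstats avg(count) as weekly_avg
| eval deviation=round(count - weekly_avg, 0)

With this, the last 3-hour bucket gives the current count, and deviation carries each bucket's difference from the week's average per-3-hour count.
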
As a newcomer to Splunk, I am currently seeking to gain a deeper understanding of Splunk apps and their associated benefits. While I am familiar with the process of packaging and deploying an app, I remain uncertain regarding one particular aspect: is it possible to bundle configuration related to the search head and apply it to the entire search head, as opposed to only a specific app? My difficulty in understanding the specifics of this process has led me to question whether, upon deploying the packaged configuration, it will indeed only be applied to that specific app and not to the wider Splunk environment. I would greatly appreciate it if you could point me towards any relevant documents or resources too.

I need to convert 2023-03-15T17:25:18.832-0400 to YYYY-MM-DD HH:MM:SS.mmm format:

2023-03-15T17:25:18.832-0400 -------------------> 2023-03-15 17:25:18.832

Once converted to the asked format, I need to calculate the difference between EndTime and StartTime.
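A minimal sketch using strptime/strftime (the field names StartTime and EndTime are assumptions; %3N covers the milliseconds and %z the -0400 offset):

| eval start_epoch=strptime(StartTime, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval end_epoch=strptime(EndTime, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval StartTime_fmt=strftime(start_epoch, "%Y-%m-%d %H:%M:%S.%3N")
| eval EndTime_fmt=strftime(end_epoch, "%Y-%m-%d %H:%M:%S.%3N")
| eval duration_sec=round(end_epoch - start_epoch, 3)
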
I am a Splunk newbie needing help. I had successfully set up my Windows PCs to log printing events, specifically event 307. I set up a Splunk report that would list any activity with this event ID. I noticed that one PC that shared a printer stopped reporting. I can see in the PC's event log that it is logging print jobs, but they are not showing up in the Splunk index that collects the logs. Other events from that PC are being logged in the index. I ran some tests on other PCs, doing print jobs "to PDF". It shows up in the index for one other PC but does not for the others. I am using the Splunk agent on all my PCs. Is there a way to track an event being logged on a PC and making its way through the Splunk agent onto the Splunk server and into the index? Also, the historic print events that used to show up in the index no longer do when I choose "All Time" as the time reference.
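As one starting point for the tracing question, Splunk's internal index often shows where ingestion breaks down on a given forwarder; a sketch (the host name is a placeholder):

index=_internal host=YOUR_PC_NAME source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
| stats count by component, log_level
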
I have two different queries that return exactly the same result:

value | chart count(status) by request_method, http_cloudid

and

value | chart count by request_method, http_cloudid

Does it mean that the field specified in the `count` function actually does nothing when used with `chart`?
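They only coincide when status exists in every event: count(status) counts events where status is non-null, while a bare count counts all events. A quick sketch with synthetic data where they diverge:

| makeresults count=4
| streamstats count as n
| eval request_method="GET", http_cloudid="c1", status=if(n=1, null(), 200)
| stats count as all_events count(status) as events_with_status by request_method, http_cloudid

Here all_events is 4 but events_with_status is 3, and chart behaves the same way.
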
Hi, I have a new Node app and am trying to install the AppDynamics package. This fails with a 403 when downloading some files. This is from a Windows 11 machine using Node 18.15.0. We also have Mac M1 developers who need to be able to install this package. Seeing as this is a 403, is there some kind of authentication needed? Or is the file just not available over the VPN? If I paste the download URL into a browser I also get a 403. Thanks.

installing package appdynamics-libagent-napi-native v18.15.0
Testing: BaseUrl set is: https://cdn.appdynamics.com/packages/nodejs/
Agent version 23.3.0
Get agent version result is: 23.3.0.0
Agent version 23.3.0
baseUrl is: https://cdn.appdynamics.com/packages/nodejs/
DownloadURL is: https://cdn.appdynamics.com/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz
https://cdn.appdynamics.com/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz
Agent version 23.3.0
Downloading https://cdn.appdynamics.com/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz
Saving to C:\Users\alanm\AppData\Local\Temp\appdynamics-23.3.0.0\appdynamics-libagent-napi-native-win32-x64-v18.tgz
user agent: yarn/1.22.19 npm/? node/v18.15.0 win32 x64
Receiving...
https://cdn.appdynamics.com/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz
Error requesting archive.
Status: 403
Request options:
{
  "uri": "https://cdn.appdynamics.com/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz",
  "followRedirect": true,
  "headers": {
    "User-Agent": "yarn/1.22.19 npm/? node/v18.15.0 win32 x64"
  },
  "gzip": true,
  "encoding": null,
  "strictSSL": true,
  "hostname": "cdn.appdynamics.com",
  "path": "/packages/nodejs/23.3.0.0/appdynamics-libagent-napi-native-win32-x64-v18.tgz"
}
Response headers:
{
  "content-type": "application/xml",
  "transfer-encoding": "chunked",
  "connection": "close",
  "date": "Wed, 15 Mar 2023 21:13:36 GMT",
  "server": "nginx/1.16.1",
  "x-cache": "Error from cloudfront",
  "via": "1.1 9910b161083ec8200ad24e6d6beec168.cloudfront.net (CloudFront)",
  "x-amz-cf-pop": "SYD1-C1",

Hello All - I'm fairly new to Splunk and I've been racking my brain for the past 8 hours trying to create a table for comparing two sets of runs. My search string is as follows:

index="fe" source="regress_rpt" pipeline="soc" version IN ("(version="23ww09a" OR version="23ww10b")") dut="*" cthModel="*" (testlist="*") (testName="*") status=*
| rex field=testPath "/regression/(?<regressionPath>.*)"
| search regressionPath="*"
| search regressionPath="***"
| stats values(cyclesPerCpuSec) values(version) by testPath
| reverse

This will generate a table similar to:

testPath   values(cyclesPerCpuSec)   values(version)
test.6     719.91                    23ww10b
test.5     742.44                    23ww10b
test.4     563.46                    23ww10b
test.6     694.36                    23ww10a
test.5     423.23                    23ww10a
test.4     146.34                    23ww10a

However, I want to display a table that looks like:

testPath   23ww10a   23ww10b
test.6     694.36    719.91
test.5     423.23    742.44
test.4     146.34    563.46

Essentially, I'm trying to create a table so I can compare the results of a specific test across different model versions. Any help is greatly appreciated! Thanks, Phil
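For the record, that pivot is what xyseries (or chart ... over ... by ...) produces; a sketch building on the search above (using avg as the aggregate and a plain OR for the version filter, both assumptions):

index="fe" source="regress_rpt" pipeline="soc" (version="23ww10a" OR version="23ww10b") status=*
| rex field=testPath "/regression/(?<regressionPath>.*)"
| stats avg(cyclesPerCpuSec) as cycles by testPath, version
| xyseries testPath version cycles
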
Query:

index=xxx application_code=mobile NOT feature
| stats count by code message
| sort -count
| eval message=substr(message, 1, 40)

Output:

code              message                      count
mobile-job-115    application error occured    100
mobile-app-180    application is stable        240
app-job-800       information good             34
project-job-100   system error occured         10
project-job-100   system error occured         20
project-job-100   system error occured         34
project-job-100   system error occured         23
project-job-100   system error occured         50

Expected output:

code              message                      count
mobile-job-115    application error occured    100
mobile-app-180    application is stable        240
app-job-800       information good             34
project-job-100   system error occured         137

I want my table to display the count as one value for similar messages - for example, the "system error occured" rows above should be combined.
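A sketch of one likely fix, assuming the duplicate rows differ only by stray whitespace in message: normalize the field before the stats rather than after it:

index=xxx application_code=mobile NOT feature
| eval message=substr(trim(message), 1, 40)
| stats count by code message
| sort -count

This way the five "system error occured" rows collapse into one row with count 137.
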
Struggling with alert fatigue, lack of context, and prioritization around security incidents? With Splunk Enterprise Security 7.1, we made it even easier to analyze malicious activities and determine the scope of incidents faster. Splunk Enterprise Security 7.1's new visualization features include Threat Topology, which determines the scope of security incidents, and MITRE ATT&CK Framework Visualization, which highlights the tactics and techniques observed in risk events so that you can respond faster.

Highlights:

Quickly discover the scope of an incident to respond with accuracy
Improve security workflow efficiencies with embedded frameworks
Operationalize the MITRE ATT&CK framework when responding to Notable Events
Identify additional impacted subjects of an investigation without writing a single line of query language

I've seen similar posts but most are without an answer or the answer doesn't apply to me. I'm sending a valid blob of JSON to HEC, and am seeing this error in the log: ERROR JsonLineBreaker [2809 parsing] - JSON StreamId:0 had parsing error:Unexpected character while looking for value: 'm' - data_source="http:***", data_host="compy-manjaro", data_sourcetype=" _json"   Here is my HEC token's config:   Here is the config of the related index (type is "metrics"):   Here's an example payload:     { "event": "metric", "time": 1678911825, "host": "compy-manjaro", "fields": { "app.name": "my-app", "app.version": "v0.0.1 (unknown@unknown)", "health:db": 0, "health:diskSpace": 0, "health:mail": 0, "health:ping": 0, "application.ready.time:value": 15603.0, "application.started.time:value": 15593.0, "disk.free:value": 2.210336768E10, "disk.total:value": 2.4284653568E11, "executor.active:value": 0.0, "executor.completed:count": 0.0, "executor.pool.core:value": 0.0, "executor.pool.max:value": 2.147483647E9, "executor.pool.size:value": 0.0, "executor.queue.remaining:value": 2.147483647E9, "executor.queued:value": 0.0, "hikaricp.connections.acquire:count": 12.0, "hikaricp.connections.acquire:max": 0.0, "hikaricp.connections.acquire:total": 8.146637, "hikaricp.connections.active:value": 0.0, "hikaricp.connections.creation:count": 0.0, "hikaricp.connections.creation:max": 0.0, "hikaricp.connections.creation:total": 0.0, "hikaricp.connections.idle:value": 11.0, "hikaricp.connections.max:value": 40.0, "hikaricp.connections.min:value": 10.0, "hikaricp.connections.pending:value": 0.0, "hikaricp.connections.timeout:count": 0.0, "hikaricp.connections.usage:count": 12.0, "hikaricp.connections.usage:max": 0.0, "hikaricp.connections.usage:total": 59.0, "hikaricp.connections:value": 11.0, "jdbc.connections.active:value": 0.0, "jdbc.connections.idle:value": 11.0, "jdbc.connections.max:value": 40.0, "jdbc.connections.min:value": 10.0, "jvm.buffer.count:value": 17.0, "jvm.buffer.memory.used:value": 0.0, "jvm.buffer.total.capacity:value": 0.0, "jvm.classes.loaded:value": 22964.0, "jvm.classes.unloaded:count": 6.0, "jvm.gc.live.data.size:value": 0.0, "jvm.gc.max.data.size:value": 8.405385216E9, "jvm.gc.memory.allocated:count": 1.023410176E9, "jvm.gc.memory.promoted:count": 1.22555392E8, "jvm.gc.overhead:value": 0.005311596570632951, "jvm.gc.pause:count": 9.0, "jvm.gc.pause:max": 0.0, "jvm.gc.pause:total": 175.0, "jvm.memory.committed:value": 1.6449536E7, "jvm.memory.max:value": -1.0, "jvm.memory.usage.after.gc:value": 0.01895299976219436, "jvm.memory.used:value": 1.59307264E8, "jvm.threads.daemon:value": 45.0, "jvm.threads.live:value": 68.0, "jvm.threads.peak:value": 69.0, "jvm.threads.states:value": 0.0, "logback.events:count": 0.0, "process.cpu.usage:value": 0.007488087134104833, "process.files.max:value": 524288.0, "process.files.open:value": 373.0, "process.start.time:value": 1.678911778094E12, "process.uptime:value": 47711.0, "system.cpu.count:value": 8.0, "system.cpu.usage:value": 0.1834410064603876, "system.load.average.1m:value": 4.71533203125, "tomcat.sessions.active.current:value": 0.0, "tomcat.sessions.active.max:value": 0.0, "tomcat.sessions.alive.max:value": 0.0, "tomcat.sessions.created:count": 0.0, "tomcat.sessions.expired:count": 0.0, "tomcat.sessions.rejected:count": 0.0 } }      
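As a sanity check on the receiving side (a sketch; the metrics index name is a placeholder), mpreview can confirm whether any points are landing in the metrics index despite the parser error:

| mpreview index=your_metrics_index
| head 10
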
I have the install file for version 3.1, and it had eventgen.conf. Version 5.1 does not have eventgen.conf. I do not see in the release notes when this was removed. How do I go about using this app to generate events?

Hello,

This is regarding AppDynamics integration with Cognos Analytics 11.1.7. After setting up both the machine agent and the app agent, everything seems to be working OK with both the application and AppD monitoring, but I'm seeing the following errors in the application logs. Has anyone come across these, or do you have any input on how to resolve them? This is the AppD version: appd_22.5.0.tar.gz

2023-03-15T18:07:00.263+0000 ERROR com.cognos.pogo.bibus.BIBusCommand [Default Executor-thread-47] Cs2Gsd2yvjv9s4lwq2lqq449vM4lsq2M2yl8yGdv Cs2Gsd2yvjv9s4lwq2lqq449vM4lsq2M2yl8yGdv Mhsl9sGh9w2898CG9yMMjwwsvyCvvGlhjC8jlwdw NA 10.60.19.137 40442 NA IBM Cognos 9848 Set-Cookie not added to the response envelope for following: set-cookie: ADRUM_BTa=R:52|g:229b0857-9dbd-4579-b29e-89375bc624ac|n:xxxxx-test_469db7a0-d416-4fa5-835e-e26c3316559d; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/, ADRUM_BT1=R:52|i:14977382|e:1052; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/
org.apache.commons.httpclient.HttpException: Unable to parse expiration date parameter: "Thu"
    at com.cognos.pogo.util.MyCookie.parse(MyCookie.java:148) ~[p2pd.jar:?]
    at com.cognos.pogo.util.MyCookie.parse(MyCookie.java:59) ~[p2pd.jar:?]
    at com.cognos.pogo.pdk.BIBusEnvelope.addSetCookies(BIBusEnvelope.java:1140) ~[p2pd.jar:?]
    at com.cognos.pogo.bibus.BIBusCommand.processSetCookie(BIBusCommand.java:544) [p2pd.jar:?]
    at com.cognos.pogo.bibus.BIBusCommand.handleResponse(BIBusCommand.java:535) [p2pd.jar:?]
    at com.cognos.pogo.bibus.BIBusCommand.processResponse(BIBusCommand.java:263) [p2pd.jar:?]
    at com.cognos.pogo.bibus.BIBusCommand.executeCommand(BIBusCommand.java:218) [p2pd.jar:?]
    at com.cognos.pogo.bibus.BIBusCommand.execute(BIBusCommand.java:201) [p2pd.jar:?]
    at com.cognos.pogo.handlers.contentmanager.CMHandler.executeCmCommand(CMHandler.java:168) [p2pd.jar:?]
    at com.cognos.pogo.handlers.contentmanager.CMHandler.invokeImpl(CMHandler.java:152) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.auth.NewAuthHandler.invokeImpl(NewAuthHandler.java:126) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.handlers.logic.IfHandler.invokeImpl(IfHandler.java:56) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.impl.PogoEngineImpl.invokeHandler(PogoEngineImpl.java:158) [p2pd.jar:?]
    at com.cognos.pogo.handlers.engine.ServiceLookupHandler.invokeImpl(ServiceLookupHandler.java:127) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.handlers.performance.PerformanceIndicationHandler.invokeImpl(PerformanceIndicationHandler.java:118) [p2pd.jar:?]
    at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
    at com.cognos.pogo.impl.PogoEngineImpl.service(PogoEngineImpl.java:126) [p2pd.jar:?]
    at com.cognos.pogo.transport.PogoServlet.processRequest(PogoServlet.java:273) [p2pd.jar:?]
    at com.cognos.pogo.transport.PogoServlet.doPost(PogoServlet.java:736) [p2pd.jar:?]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) [com.ibm.websphere.javaee.servlet.3.1_1.0.58.jar:?]
    at com.cognos.pogo.pdk.performance.servlet.PerformanceIndicatorWrappedServlet.service(PerformanceIndicatorWrappedServlet.java:31) [p2pd.jar:?]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [com.ibm.websphere.javaee.servlet.3.1_1.0.58.jar:?]
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1258) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:746) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:443) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:193) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:98) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.bi.logging.glug.support.web.BITransactionFilter.doFilter(BITransactionFilter.java:68) [glug-support.jar:11.1.7-22091315]
    at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.bi.logging.glug.support.web.BITransactionFilter.doFilter(BITransactionFilter.java:68) [glug-support.jar:11.1.7-22091315]
    at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:1002) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1140) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1011) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) [com.ibm.ws.webcontainer_1.1.58.jar:?]
    at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1159) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:428) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:387) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:566) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:500) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:360) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:327) [com.ibm.ws.transport.http_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047) [com.ibm.ws.channelfw_1.0.58.jar:?]
    at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:238) [com.ibm.ws.threading_1.1.58.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_311]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_311]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]

Hi,

Let's say that I have a database with a table like this:

I would like to know if it is possible to access and store the "New" and "Closed" statuses in some variables, in order to draw a chart based on how many new items were created and how many were closed in the same time frame?

Thank you for your help,
Oulebsir Kiman
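If the table lives in an external database reached through Splunk DB Connect, a sketch along these lines might be a starting point (the connection, table, column names, and date format are all assumptions, since the table itself didn't come through):

| dbxquery connection="your_connection" query="SELECT status, created_date FROM your_table"
| eval _time=strptime(created_date, "%Y-%m-%d")
| chart count over _time by status
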
We are trying to follow this example; however, we are getting this error:

{"message": "Error provisioning account on any cluster"

Of course, we are using the correct account name and API key. Any ideas? We are using our controller URL.

POST https://analytics.api.example.com/events/publish/{schemaName}
X-Events-API-AccountName:<global_account_name>
X-Events-API-Key:<api_key>
Content-Type: application/vnd.appd.events+json;v=2
Accept: application/vnd.appd.events+json;v=2
{
  "schema" : {
    "account": "integer",
    "amount": "float",
    "product": "string"
  }
}